2026-03-19 01:16:40.600682 | Job console starting
2026-03-19 01:16:40.611769 | Updating git repos
2026-03-19 01:16:40.686665 | Cloning repos into workspace
2026-03-19 01:16:40.946710 | Restoring repo states
2026-03-19 01:16:40.967725 | Merging changes
2026-03-19 01:16:40.967750 | Checking out repos
2026-03-19 01:16:41.219699 | Preparing playbooks
2026-03-19 01:16:41.852142 | Running Ansible setup
2026-03-19 01:16:46.268336 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-19 01:16:47.046538 |
2026-03-19 01:16:47.046701 | PLAY [Base pre]
2026-03-19 01:16:47.063798 |
2026-03-19 01:16:47.063941 | TASK [Setup log path fact]
2026-03-19 01:16:47.094706 | orchestrator | ok
2026-03-19 01:16:47.113185 |
2026-03-19 01:16:47.113343 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-19 01:16:47.160717 | orchestrator | ok
2026-03-19 01:16:47.175594 |
2026-03-19 01:16:47.175729 | TASK [emit-job-header : Print job information]
2026-03-19 01:16:47.226568 | # Job Information
2026-03-19 01:16:47.226767 | Ansible Version: 2.16.14
2026-03-19 01:16:47.226804 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-19 01:16:47.226863 | Pipeline: periodic-midnight
2026-03-19 01:16:47.226889 | Executor: 521e9411259a
2026-03-19 01:16:47.226969 | Triggered by: https://github.com/osism/testbed
2026-03-19 01:16:47.226998 | Event ID: 85a5e9a9786347a19925be0ce68d76d6
2026-03-19 01:16:47.233882 |
2026-03-19 01:16:47.234001 | LOOP [emit-job-header : Print node information]
2026-03-19 01:16:47.367710 | orchestrator | ok:
2026-03-19 01:16:47.368014 | orchestrator | # Node Information
2026-03-19 01:16:47.368090 | orchestrator | Inventory Hostname: orchestrator
2026-03-19 01:16:47.368135 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-19 01:16:47.368175 | orchestrator | Username: zuul-testbed05
2026-03-19 01:16:47.368213 | orchestrator | Distro: Debian 12.13
2026-03-19 01:16:47.368255 | orchestrator | Provider: static-testbed
2026-03-19 01:16:47.368293 | orchestrator | Region:
2026-03-19 01:16:47.368329 | orchestrator | Label: testbed-orchestrator
2026-03-19 01:16:47.368365 | orchestrator | Product Name: OpenStack Nova
2026-03-19 01:16:47.368400 | orchestrator | Interface IP: 81.163.193.140
2026-03-19 01:16:47.396103 |
2026-03-19 01:16:47.396271 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-19 01:16:47.899687 | orchestrator -> localhost | changed
2026-03-19 01:16:47.915475 |
2026-03-19 01:16:47.915632 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-19 01:16:49.024866 | orchestrator -> localhost | changed
2026-03-19 01:16:49.047740 |
2026-03-19 01:16:49.047886 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-19 01:16:49.347664 | orchestrator -> localhost | ok
2026-03-19 01:16:49.364990 |
2026-03-19 01:16:49.365185 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-19 01:16:49.413549 | orchestrator | ok
2026-03-19 01:16:49.433898 | orchestrator | included: /var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-19 01:16:49.442561 |
2026-03-19 01:16:49.442673 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-19 01:16:50.412360 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-19 01:16:50.413116 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/8fffc4fdde5e43cd9dfdb6b2ab020e89_id_rsa
2026-03-19 01:16:50.413255 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/8fffc4fdde5e43cd9dfdb6b2ab020e89_id_rsa.pub
2026-03-19 01:16:50.413338 | orchestrator -> localhost | The key fingerprint is:
2026-03-19 01:16:50.413413 | orchestrator -> localhost | SHA256:gZcStSXAMQIu7avtb4G+u+748Yno7J6LWyW3WpXn1vw zuul-build-sshkey
2026-03-19 01:16:50.413480 | orchestrator -> localhost | The key's randomart image is:
2026-03-19 01:16:50.413578 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-19 01:16:50.413645 | orchestrator -> localhost | | ....=+o . |
2026-03-19 01:16:50.413710 | orchestrator -> localhost | | o ..+ = |
2026-03-19 01:16:50.413791 | orchestrator -> localhost | |. o o = |
2026-03-19 01:16:50.413890 | orchestrator -> localhost | | o + . |
2026-03-19 01:16:50.413989 | orchestrator -> localhost | | o.o o S |
2026-03-19 01:16:50.414134 | orchestrator -> localhost | | .=.o o o |
2026-03-19 01:16:50.414406 | orchestrator -> localhost | | .+ o. o o |
2026-03-19 01:16:50.414505 | orchestrator -> localhost | |o*o*.. . . |
2026-03-19 01:16:50.414590 | orchestrator -> localhost | |X&&*+ E |
2026-03-19 01:16:50.414697 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-19 01:16:50.414999 | orchestrator -> localhost | ok: Runtime: 0:00:00.457271
2026-03-19 01:16:50.430882 |
2026-03-19 01:16:50.431085 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-19 01:16:50.471782 | orchestrator | ok
2026-03-19 01:16:50.486065 | orchestrator | included: /var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-19 01:16:50.495585 |
2026-03-19 01:16:50.495688 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-19 01:16:50.519604 | orchestrator | skipping: Conditional result was False
2026-03-19 01:16:50.535098 |
2026-03-19 01:16:50.535240 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-19 01:16:51.173361 | orchestrator | changed
2026-03-19 01:16:51.183009 |
2026-03-19 01:16:51.183158 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-19 01:16:51.500080 | orchestrator | ok
2026-03-19 01:16:51.510947 |
2026-03-19 01:16:51.511207 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-19 01:16:51.966340 | orchestrator | ok
2026-03-19 01:16:51.974459 |
2026-03-19 01:16:51.974585 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-19 01:16:52.440085 | orchestrator | ok
2026-03-19 01:16:52.449947 |
2026-03-19 01:16:52.450135 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-19 01:16:52.475462 | orchestrator | skipping: Conditional result was False
2026-03-19 01:16:52.488648 |
2026-03-19 01:16:52.488788 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-19 01:16:52.960585 | orchestrator -> localhost | changed
2026-03-19 01:16:52.987083 |
2026-03-19 01:16:52.987249 | TASK [add-build-sshkey : Add back temp key]
2026-03-19 01:16:53.336935 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/8fffc4fdde5e43cd9dfdb6b2ab020e89_id_rsa (zuul-build-sshkey)
2026-03-19 01:16:53.337521 | orchestrator -> localhost | ok: Runtime: 0:00:00.017170
2026-03-19 01:16:53.353295 |
2026-03-19 01:16:53.353460 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-19 01:16:53.809864 | orchestrator | ok
2026-03-19 01:16:53.821079 |
2026-03-19 01:16:53.821269 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-19 01:16:53.856712 | orchestrator | skipping: Conditional result was False
2026-03-19 01:16:53.915885 |
2026-03-19 01:16:53.916019 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-19 01:16:54.360293 | orchestrator | ok
2026-03-19 01:16:54.375220 |
2026-03-19 01:16:54.375343 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-19 01:16:54.419109 | orchestrator | ok
2026-03-19 01:16:54.429085 |
2026-03-19 01:16:54.429204 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-19 01:16:54.746142 | orchestrator -> localhost | ok
2026-03-19 01:16:54.754043 |
2026-03-19 01:16:54.754164 | TASK [validate-host : Collect information about the host]
2026-03-19 01:16:56.571712 | orchestrator | ok
2026-03-19 01:16:56.594900 |
2026-03-19 01:16:56.595092 | TASK [validate-host : Sanitize hostname]
2026-03-19 01:16:56.656588 | orchestrator | ok
2026-03-19 01:16:56.663307 |
2026-03-19 01:16:56.663418 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-19 01:16:57.294644 | orchestrator -> localhost | changed
2026-03-19 01:16:57.303381 |
2026-03-19 01:16:57.303548 | TASK [validate-host : Collect information about zuul worker]
2026-03-19 01:16:57.760481 | orchestrator | ok
2026-03-19 01:16:57.769532 |
2026-03-19 01:16:57.769692 | TASK [validate-host : Write out all zuul information for each host]
2026-03-19 01:16:58.331721 | orchestrator -> localhost | changed
2026-03-19 01:16:58.345479 |
2026-03-19 01:16:58.345623 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-19 01:16:58.669585 | orchestrator | ok
2026-03-19 01:16:58.679788 |
2026-03-19 01:16:58.679927 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-19 01:17:16.251650 | orchestrator | changed:
2026-03-19 01:17:16.252246 | orchestrator | .d..t...... src/
2026-03-19 01:17:16.252375 | orchestrator | .d..t...... src/github.com/
2026-03-19 01:17:16.252450 | orchestrator | .d..t...... src/github.com/osism/
2026-03-19 01:17:16.252513 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-19 01:17:16.252576 | orchestrator | RedHat.yml
2026-03-19 01:17:16.278078 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-19 01:17:16.278104 | orchestrator | RedHat.yml
2026-03-19 01:17:16.278200 | orchestrator | = 2.2.0"...
2026-03-19 01:17:26.505873 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-19 01:17:26.524515 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-19 01:17:26.675473 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-19 01:17:27.390443 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-19 01:17:27.805459 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-19 01:17:28.440728 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-19 01:17:28.507585 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-19 01:17:29.001574 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-19 01:17:29.001685 | orchestrator |
2026-03-19 01:17:29.001693 | orchestrator | Providers are signed by their developers.
2026-03-19 01:17:29.001698 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-19 01:17:29.001710 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-19 01:17:29.001748 | orchestrator |
2026-03-19 01:17:29.001754 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-19 01:17:29.001759 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-19 01:17:29.001772 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-19 01:17:29.001783 | orchestrator | you run "tofu init" in the future.
2026-03-19 01:17:29.002275 | orchestrator |
2026-03-19 01:17:29.002338 | orchestrator | OpenTofu has been successfully initialized!
2026-03-19 01:17:29.002375 | orchestrator |
2026-03-19 01:17:29.002384 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-19 01:17:29.002392 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-19 01:17:29.002398 | orchestrator | should now work.
2026-03-19 01:17:29.002404 | orchestrator |
2026-03-19 01:17:29.002410 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-19 01:17:29.002416 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-19 01:17:29.002432 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-19 01:17:29.170163 | orchestrator | Created and switched to workspace "ci"!
2026-03-19 01:17:29.170233 | orchestrator |
2026-03-19 01:17:29.170240 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-19 01:17:29.170246 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-19 01:17:29.170251 | orchestrator | for this configuration.
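[Editor's note: the provider versions resolved during the `tofu init` above would typically come from a `required_providers` block along these lines. This is a hedged sketch, not the testbed's actual configuration (which is not shown in this log): only the ">= 1.53.0" constraint for the openstack provider is visible in the init output; the source addresses for hashicorp/local and hashicorp/null are taken from the install lines, and their version constraints are assumptions.]

```hcl
# Hypothetical reconstruction of the provider requirements consistent with
# the init output above. Only ">= 1.53.0" for the openstack provider is
# confirmed by the log; the other constraints are placeholders.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.4.0 in this run
    }
    local = {
      source = "hashicorp/local" # resolved to v2.7.0
    }
    null = {
      source = "hashicorp/null" # resolved to v3.2.4
    }
  }
}
```

The `Created and switched to workspace "ci"!` line that follows corresponds to `tofu workspace new ci`, which gives this CI run its own isolated state.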
2026-03-19 01:17:29.329904 | orchestrator | ci.auto.tfvars
2026-03-19 01:17:29.337755 | orchestrator | default_custom.tf
2026-03-19 01:17:30.441124 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-19 01:17:30.983670 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-19 01:17:31.418126 | orchestrator |
2026-03-19 01:17:31.418224 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-19 01:17:31.418238 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-19 01:17:31.418247 | orchestrator | + create
2026-03-19 01:17:31.418257 | orchestrator | <= read (data resources)
2026-03-19 01:17:31.418270 | orchestrator |
2026-03-19 01:17:31.418282 | orchestrator | OpenTofu will perform the following actions:
2026-03-19 01:17:31.418304 | orchestrator |
2026-03-19 01:17:31.418315 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-19 01:17:31.418326 | orchestrator | # (config refers to values not yet known)
2026-03-19 01:17:31.418338 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-19 01:17:31.418350 | orchestrator | + checksum = (known after apply)
2026-03-19 01:17:31.418361 | orchestrator | + created_at = (known after apply)
2026-03-19 01:17:31.418372 | orchestrator | + file = (known after apply)
2026-03-19 01:17:31.418383 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.418427 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.418438 | orchestrator | + min_disk_gb = (known after apply)
2026-03-19 01:17:31.418450 | orchestrator | + min_ram_mb = (known after apply)
2026-03-19 01:17:31.418460 | orchestrator | + most_recent = true
2026-03-19 01:17:31.418472 | orchestrator | + name = (known after apply)
2026-03-19 01:17:31.418484 | orchestrator | + protected = (known after apply)
2026-03-19 01:17:31.418495 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.418510 | orchestrator | + schema = (known after apply)
2026-03-19 01:17:31.418523 | orchestrator | + size_bytes = (known after apply)
2026-03-19 01:17:31.418535 | orchestrator | + tags = (known after apply)
2026-03-19 01:17:31.418547 | orchestrator | + updated_at = (known after apply)
2026-03-19 01:17:31.418559 | orchestrator | }
2026-03-19 01:17:31.418571 | orchestrator |
2026-03-19 01:17:31.418583 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-19 01:17:31.418626 | orchestrator | # (config refers to values not yet known)
2026-03-19 01:17:31.418643 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-19 01:17:31.418655 | orchestrator | + checksum = (known after apply)
2026-03-19 01:17:31.418667 | orchestrator | + created_at = (known after apply)
2026-03-19 01:17:31.418680 | orchestrator | + file = (known after apply)
2026-03-19 01:17:31.418691 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.418704 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.418713 | orchestrator | + min_disk_gb = (known after apply)
2026-03-19 01:17:31.418720 | orchestrator | + min_ram_mb = (known after apply)
2026-03-19 01:17:31.418727 | orchestrator | + most_recent = true
2026-03-19 01:17:31.418735 | orchestrator | + name = (known after apply)
2026-03-19 01:17:31.418742 | orchestrator | + protected = (known after apply)
2026-03-19 01:17:31.418750 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.418757 | orchestrator | + schema = (known after apply)
2026-03-19 01:17:31.418764 | orchestrator | + size_bytes = (known after apply)
2026-03-19 01:17:31.418771 | orchestrator | + tags = (known after apply)
2026-03-19 01:17:31.418778 | orchestrator | + updated_at = (known after apply)
2026-03-19 01:17:31.418786 | orchestrator | }
2026-03-19 01:17:31.418793 | orchestrator |
2026-03-19 01:17:31.418800 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-19 01:17:31.418808 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-19 01:17:31.418815 | orchestrator | + content = (known after apply)
2026-03-19 01:17:31.418823 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 01:17:31.418831 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 01:17:31.418838 | orchestrator | + content_md5 = (known after apply)
2026-03-19 01:17:31.418845 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 01:17:31.418852 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 01:17:31.418860 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 01:17:31.418867 | orchestrator | + directory_permission = "0777"
2026-03-19 01:17:31.418874 | orchestrator | + file_permission = "0644"
2026-03-19 01:17:31.418883 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-19 01:17:31.418894 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.418913 | orchestrator | }
2026-03-19 01:17:31.418926 | orchestrator |
2026-03-19 01:17:31.418938 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-19 01:17:31.418949 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-19 01:17:31.418959 | orchestrator | + content = (known after apply)
2026-03-19 01:17:31.418970 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 01:17:31.418982 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 01:17:31.418994 | orchestrator | + content_md5 = (known after apply)
2026-03-19 01:17:31.419005 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 01:17:31.419016 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 01:17:31.419028 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 01:17:31.419039 | orchestrator | + directory_permission = "0777"
2026-03-19 01:17:31.419050 | orchestrator | + file_permission = "0644"
2026-03-19 01:17:31.419077 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-19 01:17:31.419089 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419100 | orchestrator | }
2026-03-19 01:17:31.419110 | orchestrator |
2026-03-19 01:17:31.419133 | orchestrator | # local_file.inventory will be created
2026-03-19 01:17:31.419145 | orchestrator | + resource "local_file" "inventory" {
2026-03-19 01:17:31.419156 | orchestrator | + content = (known after apply)
2026-03-19 01:17:31.419167 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 01:17:31.419178 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 01:17:31.419190 | orchestrator | + content_md5 = (known after apply)
2026-03-19 01:17:31.419202 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 01:17:31.419216 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 01:17:31.419227 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 01:17:31.419239 | orchestrator | + directory_permission = "0777"
2026-03-19 01:17:31.419252 | orchestrator | + file_permission = "0644"
2026-03-19 01:17:31.419264 | orchestrator | + filename = "inventory.ci"
2026-03-19 01:17:31.419276 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419288 | orchestrator | }
2026-03-19 01:17:31.419300 | orchestrator |
2026-03-19 01:17:31.419313 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-19 01:17:31.419328 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-19 01:17:31.419345 | orchestrator | + content = (sensitive value)
2026-03-19 01:17:31.419356 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-19 01:17:31.419367 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-19 01:17:31.419378 | orchestrator | + content_md5 = (known after apply)
2026-03-19 01:17:31.419388 | orchestrator | + content_sha1 = (known after apply)
2026-03-19 01:17:31.419399 | orchestrator | + content_sha256 = (known after apply)
2026-03-19 01:17:31.419432 | orchestrator | + content_sha512 = (known after apply)
2026-03-19 01:17:31.419446 | orchestrator | + directory_permission = "0700"
2026-03-19 01:17:31.419458 | orchestrator | + file_permission = "0600"
2026-03-19 01:17:31.419470 | orchestrator | + filename = ".id_rsa.ci"
2026-03-19 01:17:31.419483 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419491 | orchestrator | }
2026-03-19 01:17:31.419499 | orchestrator |
2026-03-19 01:17:31.419506 | orchestrator | # null_resource.node_semaphore will be created
2026-03-19 01:17:31.419514 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-19 01:17:31.419521 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419529 | orchestrator | }
2026-03-19 01:17:31.419536 | orchestrator |
2026-03-19 01:17:31.419543 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-19 01:17:31.419551 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-19 01:17:31.419558 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.419565 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.419572 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419585 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.419628 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.419644 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-19 01:17:31.419656 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.419667 | orchestrator | + size = 80
2026-03-19 01:17:31.419678 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.419689 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.419698 | orchestrator | }
2026-03-19 01:17:31.419709 | orchestrator |
2026-03-19 01:17:31.419719 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-19 01:17:31.419730 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.419740 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.419750 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.419762 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419786 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.419797 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.419809 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-19 01:17:31.419820 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.419832 | orchestrator | + size = 80
2026-03-19 01:17:31.419845 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.419857 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.419869 | orchestrator | }
2026-03-19 01:17:31.419881 | orchestrator |
2026-03-19 01:17:31.419890 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-19 01:17:31.419897 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.419904 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.419911 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.419918 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.419926 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.419933 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.419940 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-19 01:17:31.419947 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.419955 | orchestrator | + size = 80
2026-03-19 01:17:31.419962 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.419969 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.419976 | orchestrator | }
2026-03-19 01:17:31.419988 | orchestrator |
2026-03-19 01:17:31.420006 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-19 01:17:31.420019 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.420030 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420043 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420056 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420068 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.420080 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420093 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-19 01:17:31.420101 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420108 | orchestrator | + size = 80
2026-03-19 01:17:31.420115 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420122 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420130 | orchestrator | }
2026-03-19 01:17:31.420137 | orchestrator |
2026-03-19 01:17:31.420144 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-19 01:17:31.420151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.420159 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420166 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420175 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420187 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.420198 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420219 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-19 01:17:31.420230 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420242 | orchestrator | + size = 80
2026-03-19 01:17:31.420254 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420266 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420279 | orchestrator | }
2026-03-19 01:17:31.420291 | orchestrator |
2026-03-19 01:17:31.420303 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-19 01:17:31.420315 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.420328 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420339 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420351 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420373 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.420385 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420398 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-19 01:17:31.420409 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420419 | orchestrator | + size = 80
2026-03-19 01:17:31.420432 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420444 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420457 | orchestrator | }
2026-03-19 01:17:31.420468 | orchestrator |
2026-03-19 01:17:31.420480 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-19 01:17:31.420506 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-19 01:17:31.420519 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420531 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420543 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420555 | orchestrator | + image_id = (known after apply)
2026-03-19 01:17:31.420567 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420580 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-19 01:17:31.420592 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420629 | orchestrator | + size = 80
2026-03-19 01:17:31.420641 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420653 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420664 | orchestrator | }
2026-03-19 01:17:31.420675 | orchestrator |
2026-03-19 01:17:31.420687 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-19 01:17:31.420700 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.420713 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420724 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420737 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420749 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420761 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-19 01:17:31.420773 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420785 | orchestrator | + size = 20
2026-03-19 01:17:31.420797 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420809 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420821 | orchestrator | }
2026-03-19 01:17:31.420833 | orchestrator |
2026-03-19 01:17:31.420845 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-19 01:17:31.420858 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.420870 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.420882 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.420894 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.420906 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.420919 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-19 01:17:31.420931 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.420942 | orchestrator | + size = 20
2026-03-19 01:17:31.420955 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.420967 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.420979 | orchestrator | }
2026-03-19 01:17:31.420991 | orchestrator |
2026-03-19 01:17:31.421004 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-19 01:17:31.421016 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421027 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421039 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421051 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421064 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421076 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-19 01:17:31.421088 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421111 | orchestrator | + size = 20
2026-03-19 01:17:31.421124 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421136 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421147 | orchestrator | }
2026-03-19 01:17:31.421160 | orchestrator |
2026-03-19 01:17:31.421172 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-19 01:17:31.421184 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421196 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421208 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421220 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421232 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421244 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-19 01:17:31.421256 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421269 | orchestrator | + size = 20
2026-03-19 01:17:31.421281 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421293 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421306 | orchestrator | }
2026-03-19 01:17:31.421315 | orchestrator |
2026-03-19 01:17:31.421322 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-19 01:17:31.421329 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421337 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421344 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421352 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421359 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421366 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-19 01:17:31.421373 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421387 | orchestrator | + size = 20
2026-03-19 01:17:31.421395 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421402 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421410 | orchestrator | }
2026-03-19 01:17:31.421417 | orchestrator |
2026-03-19 01:17:31.421424 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-19 01:17:31.421432 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421439 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421446 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421453 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421461 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421468 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-19 01:17:31.421475 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421482 | orchestrator | + size = 20
2026-03-19 01:17:31.421490 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421497 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421504 | orchestrator | }
2026-03-19 01:17:31.421511 | orchestrator |
2026-03-19 01:17:31.421519 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-19 01:17:31.421526 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421533 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421540 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421548 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421570 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421577 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-19 01:17:31.421584 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421591 | orchestrator | + size = 20
2026-03-19 01:17:31.421623 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421633 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421641 | orchestrator | }
2026-03-19 01:17:31.421648 | orchestrator |
2026-03-19 01:17:31.421655 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-19 01:17:31.421662 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-19 01:17:31.421678 | orchestrator | + attachment = (known after apply)
2026-03-19 01:17:31.421686 | orchestrator | + availability_zone = "nova"
2026-03-19 01:17:31.421693 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.421701 | orchestrator | + metadata = (known after apply)
2026-03-19 01:17:31.421708 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-19 01:17:31.421715 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.421722 | orchestrator | + size = 20
2026-03-19 01:17:31.421730 | orchestrator | + volume_retype_policy = "never"
2026-03-19 01:17:31.421737 | orchestrator | + volume_type = "ssd"
2026-03-19 01:17:31.421744 | orchestrator | }
2026-03-19 01:17:31.421751 | orchestrator |
2026-03-19 01:17:31.421759 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-19 01:17:31.421766 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-19 01:17:31.421773 | orchestrator | + attachment = (known after apply) 2026-03-19 01:17:31.421780 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.421787 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.421794 | orchestrator | + metadata = (known after apply) 2026-03-19 01:17:31.421801 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-19 01:17:31.421809 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.421816 | orchestrator | + size = 20 2026-03-19 01:17:31.421823 | orchestrator | + volume_retype_policy = "never" 2026-03-19 01:17:31.421830 | orchestrator | + volume_type = "ssd" 2026-03-19 01:17:31.421837 | orchestrator | } 2026-03-19 01:17:31.421844 | orchestrator | 2026-03-19 01:17:31.421851 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-19 01:17:31.421859 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-19 01:17:31.421866 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.421873 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.421880 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.421888 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.421895 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.421902 | orchestrator | + config_drive = true 2026-03-19 01:17:31.421909 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.421916 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.421924 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-19 01:17:31.421931 | orchestrator | + force_delete = false 2026-03-19 01:17:31.421938 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.421945 | 
orchestrator | + id = (known after apply) 2026-03-19 01:17:31.421953 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.421960 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.421967 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.421974 | orchestrator | + name = "testbed-manager" 2026-03-19 01:17:31.421982 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.421989 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.421997 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.422004 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.422011 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.422069 | orchestrator | + user_data = (sensitive value) 2026-03-19 01:17:31.422084 | orchestrator | 2026-03-19 01:17:31.422095 | orchestrator | + block_device { 2026-03-19 01:17:31.422107 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.422117 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.422135 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.422146 | orchestrator | + multiattach = false 2026-03-19 01:17:31.422157 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.422167 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.422187 | orchestrator | } 2026-03-19 01:17:31.422198 | orchestrator | 2026-03-19 01:17:31.422208 | orchestrator | + network { 2026-03-19 01:17:31.422220 | orchestrator | + access_network = false 2026-03-19 01:17:31.422230 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.422242 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.422253 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.422263 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.422274 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.422285 | orchestrator | + uuid = (known after apply) 2026-03-19 
01:17:31.422297 | orchestrator | } 2026-03-19 01:17:31.422308 | orchestrator | } 2026-03-19 01:17:31.422320 | orchestrator | 2026-03-19 01:17:31.422331 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-19 01:17:31.422343 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.422356 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.422367 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.422379 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.422386 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.422393 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.422400 | orchestrator | + config_drive = true 2026-03-19 01:17:31.422407 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.422415 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.422422 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.422429 | orchestrator | + force_delete = false 2026-03-19 01:17:31.422436 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.422443 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.422451 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.422458 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.422465 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.422472 | orchestrator | + name = "testbed-node-0" 2026-03-19 01:17:31.422479 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.422496 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.422504 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.422511 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.422518 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.422526 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.422533 | orchestrator | 2026-03-19 01:17:31.422540 | orchestrator | + block_device { 2026-03-19 01:17:31.422548 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.422555 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.422562 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.422569 | orchestrator | + multiattach = false 2026-03-19 01:17:31.422577 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.422589 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.422619 | orchestrator | } 2026-03-19 01:17:31.422637 | orchestrator | 2026-03-19 01:17:31.422649 | orchestrator | + network { 2026-03-19 01:17:31.422660 | orchestrator | + access_network = false 2026-03-19 01:17:31.422672 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.422684 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.422694 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.422704 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.422716 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.422728 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.422740 | orchestrator | } 2026-03-19 01:17:31.422753 | orchestrator | } 2026-03-19 01:17:31.422765 | orchestrator | 2026-03-19 01:17:31.422777 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-19 01:17:31.422788 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.422795 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.422811 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.422818 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.422825 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.422832 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.422839 
| orchestrator | + config_drive = true 2026-03-19 01:17:31.422847 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.422854 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.422861 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.422868 | orchestrator | + force_delete = false 2026-03-19 01:17:31.422876 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.422883 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.422890 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.422897 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.422905 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.422912 | orchestrator | + name = "testbed-node-1" 2026-03-19 01:17:31.422919 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.422926 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.422934 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.422941 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.422948 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.422955 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.422963 | orchestrator | 2026-03-19 01:17:31.422970 | orchestrator | + block_device { 2026-03-19 01:17:31.422977 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.422985 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.422992 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.422999 | orchestrator | + multiattach = false 2026-03-19 01:17:31.423006 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.423013 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.423020 | orchestrator | } 2026-03-19 01:17:31.423028 | orchestrator | 2026-03-19 01:17:31.423035 | orchestrator | + network { 2026-03-19 01:17:31.423043 | orchestrator | + access_network = 
false 2026-03-19 01:17:31.423054 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.423069 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.423087 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.423097 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.423109 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.423119 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.423131 | orchestrator | } 2026-03-19 01:17:31.423142 | orchestrator | } 2026-03-19 01:17:31.423151 | orchestrator | 2026-03-19 01:17:31.423161 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-19 01:17:31.423172 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.423182 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.423193 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.423206 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.423217 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.423248 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.423261 | orchestrator | + config_drive = true 2026-03-19 01:17:31.423272 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.423280 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.423287 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.423294 | orchestrator | + force_delete = false 2026-03-19 01:17:31.423301 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.423308 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.423316 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.423331 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.423338 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.423346 | orchestrator | + name = 
"testbed-node-2" 2026-03-19 01:17:31.423352 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.423360 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.423367 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.423374 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.423381 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.423393 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.423404 | orchestrator | 2026-03-19 01:17:31.423421 | orchestrator | + block_device { 2026-03-19 01:17:31.423436 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.423448 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.423459 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.423482 | orchestrator | + multiattach = false 2026-03-19 01:17:31.423494 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.423506 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.423519 | orchestrator | } 2026-03-19 01:17:31.423532 | orchestrator | 2026-03-19 01:17:31.423543 | orchestrator | + network { 2026-03-19 01:17:31.423557 | orchestrator | + access_network = false 2026-03-19 01:17:31.423569 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.423580 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.423592 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.423638 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.423651 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.423663 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.423675 | orchestrator | } 2026-03-19 01:17:31.423687 | orchestrator | } 2026-03-19 01:17:31.423698 | orchestrator | 2026-03-19 01:17:31.423711 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-19 01:17:31.423719 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.423726 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.423733 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.423741 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.423748 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.423755 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.423762 | orchestrator | + config_drive = true 2026-03-19 01:17:31.423774 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.423786 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.423798 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.423809 | orchestrator | + force_delete = false 2026-03-19 01:17:31.423820 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.423830 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.423842 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.423853 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.423864 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.423876 | orchestrator | + name = "testbed-node-3" 2026-03-19 01:17:31.423887 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.423898 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.423910 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.423922 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.423935 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.423946 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.423959 | orchestrator | 2026-03-19 01:17:31.423967 | orchestrator | + block_device { 2026-03-19 01:17:31.423981 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.423989 | orchestrator | + delete_on_termination = false 2026-03-19 
01:17:31.423996 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.424011 | orchestrator | + multiattach = false 2026-03-19 01:17:31.424018 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.424025 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424032 | orchestrator | } 2026-03-19 01:17:31.424040 | orchestrator | 2026-03-19 01:17:31.424047 | orchestrator | + network { 2026-03-19 01:17:31.424054 | orchestrator | + access_network = false 2026-03-19 01:17:31.424061 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.424068 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.424075 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.424082 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.424090 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.424097 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424104 | orchestrator | } 2026-03-19 01:17:31.424111 | orchestrator | } 2026-03-19 01:17:31.424118 | orchestrator | 2026-03-19 01:17:31.424126 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-19 01:17:31.424133 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.424140 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.424147 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.424155 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.424162 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.424169 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.424176 | orchestrator | + config_drive = true 2026-03-19 01:17:31.424183 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.424190 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.424198 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.424205 | 
orchestrator | + force_delete = false 2026-03-19 01:17:31.424212 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.424219 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.424226 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.424236 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.424250 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.424268 | orchestrator | + name = "testbed-node-4" 2026-03-19 01:17:31.424280 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.424291 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.424303 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.424313 | orchestrator | + stop_before_destroy = false 2026-03-19 01:17:31.424323 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.424335 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.424346 | orchestrator | 2026-03-19 01:17:31.424358 | orchestrator | + block_device { 2026-03-19 01:17:31.424371 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.424383 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.424395 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.424407 | orchestrator | + multiattach = false 2026-03-19 01:17:31.424418 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.424425 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424433 | orchestrator | } 2026-03-19 01:17:31.424440 | orchestrator | 2026-03-19 01:17:31.424448 | orchestrator | + network { 2026-03-19 01:17:31.424455 | orchestrator | + access_network = false 2026-03-19 01:17:31.424462 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.424469 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.424476 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.424484 | orchestrator | + name = (known 
after apply) 2026-03-19 01:17:31.424491 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.424507 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424515 | orchestrator | } 2026-03-19 01:17:31.424522 | orchestrator | } 2026-03-19 01:17:31.424537 | orchestrator | 2026-03-19 01:17:31.424545 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-19 01:17:31.424552 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-19 01:17:31.424560 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-19 01:17:31.424567 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-19 01:17:31.424575 | orchestrator | + all_metadata = (known after apply) 2026-03-19 01:17:31.424587 | orchestrator | + all_tags = (known after apply) 2026-03-19 01:17:31.424623 | orchestrator | + availability_zone = "nova" 2026-03-19 01:17:31.424636 | orchestrator | + config_drive = true 2026-03-19 01:17:31.424648 | orchestrator | + created = (known after apply) 2026-03-19 01:17:31.424660 | orchestrator | + flavor_id = (known after apply) 2026-03-19 01:17:31.424672 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-19 01:17:31.424685 | orchestrator | + force_delete = false 2026-03-19 01:17:31.424703 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-19 01:17:31.424714 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.424722 | orchestrator | + image_id = (known after apply) 2026-03-19 01:17:31.424729 | orchestrator | + image_name = (known after apply) 2026-03-19 01:17:31.424736 | orchestrator | + key_pair = "testbed" 2026-03-19 01:17:31.424744 | orchestrator | + name = "testbed-node-5" 2026-03-19 01:17:31.424751 | orchestrator | + power_state = "active" 2026-03-19 01:17:31.424758 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.424765 | orchestrator | + security_groups = (known after apply) 2026-03-19 01:17:31.424773 | orchestrator | + 
stop_before_destroy = false 2026-03-19 01:17:31.424780 | orchestrator | + updated = (known after apply) 2026-03-19 01:17:31.424787 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-19 01:17:31.424794 | orchestrator | 2026-03-19 01:17:31.424802 | orchestrator | + block_device { 2026-03-19 01:17:31.424809 | orchestrator | + boot_index = 0 2026-03-19 01:17:31.424816 | orchestrator | + delete_on_termination = false 2026-03-19 01:17:31.424823 | orchestrator | + destination_type = "volume" 2026-03-19 01:17:31.424835 | orchestrator | + multiattach = false 2026-03-19 01:17:31.424846 | orchestrator | + source_type = "volume" 2026-03-19 01:17:31.424857 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424867 | orchestrator | } 2026-03-19 01:17:31.424878 | orchestrator | 2026-03-19 01:17:31.424888 | orchestrator | + network { 2026-03-19 01:17:31.424899 | orchestrator | + access_network = false 2026-03-19 01:17:31.424909 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-19 01:17:31.424919 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-19 01:17:31.424930 | orchestrator | + mac = (known after apply) 2026-03-19 01:17:31.424941 | orchestrator | + name = (known after apply) 2026-03-19 01:17:31.424952 | orchestrator | + port = (known after apply) 2026-03-19 01:17:31.424963 | orchestrator | + uuid = (known after apply) 2026-03-19 01:17:31.424975 | orchestrator | } 2026-03-19 01:17:31.424988 | orchestrator | } 2026-03-19 01:17:31.424999 | orchestrator | 2026-03-19 01:17:31.425012 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-19 01:17:31.425022 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-19 01:17:31.425029 | orchestrator | + fingerprint = (known after apply) 2026-03-19 01:17:31.425036 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.425043 | orchestrator | + name = "testbed" 2026-03-19 01:17:31.425050 | orchestrator | + private_key = 
(sensitive value) 2026-03-19 01:17:31.425057 | orchestrator | + public_key = (known after apply) 2026-03-19 01:17:31.425065 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.425072 | orchestrator | + user_id = (known after apply) 2026-03-19 01:17:31.425079 | orchestrator | } 2026-03-19 01:17:31.425086 | orchestrator | 2026-03-19 01:17:31.425094 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-19 01:17:31.425101 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-19 01:17:31.425117 | orchestrator | + device = (known after apply) 2026-03-19 01:17:31.425124 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.425132 | orchestrator | + instance_id = (known after apply) 2026-03-19 01:17:31.425139 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.425146 | orchestrator | + volume_id = (known after apply) 2026-03-19 01:17:31.425153 | orchestrator | } 2026-03-19 01:17:31.425160 | orchestrator | 2026-03-19 01:17:31.425168 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-19 01:17:31.425175 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-19 01:17:31.425182 | orchestrator | + device = (known after apply) 2026-03-19 01:17:31.425189 | orchestrator | + id = (known after apply) 2026-03-19 01:17:31.425196 | orchestrator | + instance_id = (known after apply) 2026-03-19 01:17:31.425203 | orchestrator | + region = (known after apply) 2026-03-19 01:17:31.425210 | orchestrator | + volume_id = (known after apply) 2026-03-19 01:17:31.425217 | orchestrator | } 2026-03-19 01:17:31.425225 | orchestrator | 2026-03-19 01:17:31.425232 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-19 01:17:31.425239 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-19 01:17:31.430304 | orchestrator | + network_id = (known after apply)
2026-03-19 01:17:31.430311 | orchestrator | + no_gateway = false
2026-03-19 01:17:31.430318 | orchestrator | + region = (known after apply)
2026-03-19 01:17:31.430326 | orchestrator | + service_types = (known after apply)
2026-03-19 01:17:31.430339 | orchestrator | + tenant_id = (known after apply)
2026-03-19 01:17:31.430346 | orchestrator |
2026-03-19 01:17:31.430353 | orchestrator | + allocation_pool {
2026-03-19 01:17:31.430360 | orchestrator | + end = "192.168.31.250"
2026-03-19 01:17:31.430368 | orchestrator | + start = "192.168.31.200"
2026-03-19 01:17:31.430375 | orchestrator | }
2026-03-19 01:17:31.430382 | orchestrator | }
2026-03-19 01:17:31.430390 | orchestrator |
2026-03-19 01:17:31.430397 | orchestrator | # terraform_data.image will be created
2026-03-19 01:17:31.430404 | orchestrator | + resource "terraform_data" "image" {
2026-03-19 01:17:31.430411 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.430418 | orchestrator | + input = "Ubuntu 24.04"
2026-03-19 01:17:31.430426 | orchestrator | + output = (known after apply)
2026-03-19 01:17:31.430433 | orchestrator | }
2026-03-19 01:17:31.430440 | orchestrator |
2026-03-19 01:17:31.430447 | orchestrator | # terraform_data.image_node will be created
2026-03-19 01:17:31.430454 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-19 01:17:31.430462 | orchestrator | + id = (known after apply)
2026-03-19 01:17:31.430469 | orchestrator | + input = "Ubuntu 24.04"
2026-03-19 01:17:31.430476 | orchestrator | + output = (known after apply)
2026-03-19 01:17:31.430483 | orchestrator | }
2026-03-19 01:17:31.430490 | orchestrator |
2026-03-19 01:17:31.430497 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-19 01:17:31.430505 | orchestrator |
2026-03-19 01:17:31.430512 | orchestrator | Changes to Outputs:
2026-03-19 01:17:31.430519 | orchestrator | + manager_address = (sensitive value)
2026-03-19 01:17:31.430526 | orchestrator | + private_key = (sensitive value)
2026-03-19 01:17:31.674189 | orchestrator | terraform_data.image: Creating...
2026-03-19 01:17:31.674299 | orchestrator | terraform_data.image_node: Creating...
2026-03-19 01:17:31.674323 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=bec832f3-b61d-35b8-f00a-98ea848bd66b]
2026-03-19 01:17:31.674341 | orchestrator | terraform_data.image: Creation complete after 0s [id=9da0add7-5565-3162-c19b-e0e7a4e97f1b]
2026-03-19 01:17:31.704277 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-19 01:17:31.704360 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-19 01:17:31.713378 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-19 01:17:31.718122 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-19 01:17:31.718194 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-19 01:17:31.718205 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-19 01:17:31.718214 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-19 01:17:31.718231 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-19 01:17:31.742110 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-19 01:17:31.742192 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-19 01:17:32.173788 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-19 01:17:32.176874 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-19 01:17:32.180478 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-19 01:17:32.181371 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-19 01:17:32.208487 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-19 01:17:32.215452 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-19 01:17:32.723415 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=601d4c21-ff18-4a36-9a51-ec09597966ed]
2026-03-19 01:17:32.737114 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-19 01:17:35.328322 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=39b473cc-c557-499b-ae61-29aaa57bd422]
2026-03-19 01:17:35.335073 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-19 01:17:35.353969 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=740ce1a0-d0ce-4991-9b3f-fd403e7e525e]
2026-03-19 01:17:35.370382 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-19 01:17:35.372577 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=159498f1-f6fb-4526-96c5-103a28738ba8]
2026-03-19 01:17:35.374521 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=6ca08e20-d893-4525-9d75-036a26f1ab97]
2026-03-19 01:17:35.383706 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=77d1d0bc-0a63-49dd-b34a-7227460faeb5]
2026-03-19 01:17:35.387147 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-19 01:17:35.391670 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-19 01:17:35.394874 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-19 01:17:35.408833 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=882bbde8-c2a7-4908-ad99-b7a0a7d616d1]
2026-03-19 01:17:35.415310 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-19 01:17:35.441301 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e6be47e7-14ad-42f7-995f-7ba3ed74c5ff]
2026-03-19 01:17:35.443171 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=57dec018-1465-4558-908d-748a1c147c6d]
2026-03-19 01:17:35.447558 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-19 01:17:35.449363 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-19 01:17:35.464235 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=91fa61f2-01b9-4964-86cf-d0da46381906]
2026-03-19 01:17:35.469346 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-19 01:17:35.637124 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=3a6f5db0984fb18ee278bc2e8357dbe006d76ac5]
2026-03-19 01:17:35.637372 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=3ae739c96eedc879485a511a8211fee23917a322]
2026-03-19 01:17:36.088771 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a587d6ca-13b1-4767-8b95-b15cf08fcf75]
2026-03-19 01:17:36.343468 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=4e1c95ce-ed36-48e2-9fcf-cddada734987]
2026-03-19 01:17:36.349368 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-19 01:17:38.722291 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1]
2026-03-19 01:17:38.769563 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=29171f1c-6cc3-40cd-9178-0fa38eeda372]
2026-03-19 01:17:38.790945 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=8266a944-9a5f-4e36-bd18-89fd67130cb1]
2026-03-19 01:17:38.822106 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=dea79e11-ab75-414a-8bf6-773f9ffc0e77]
2026-03-19 01:17:38.834700 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=fd4a185e-e644-4224-9e55-45e03a3199c2]
2026-03-19 01:17:38.853246 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=3b3a0fcd-108c-44bd-8b62-9d8276f3656e]
2026-03-19 01:17:39.446129 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=fdfb63df-d1b2-40d4-a7fe-909a863c185f]
2026-03-19 01:17:40.432286 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-19 01:17:40.432355 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-19 01:17:40.432367 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-19 01:17:40.432382 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=7113d25e-e728-47bb-a222-166900c6b0ad]
2026-03-19 01:17:40.432404 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-19 01:17:40.432422 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-19 01:17:40.432434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-19 01:17:40.432479 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-19 01:17:40.432492 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-19 01:17:40.432504 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-19 01:17:40.432518 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=6538ddad-c2c8-4844-8391-6114a9a342e5]
2026-03-19 01:17:40.432531 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-19 01:17:40.432545 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-19 01:17:40.432559 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-19 01:17:40.432572 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=5a5aecd2-9fd7-43fc-ada5-9198c7fca540]
2026-03-19 01:17:40.432586 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-19 01:17:40.432671 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5a0bd58c-5486-4941-9e73-ace876a8b9dc]
2026-03-19 01:17:40.432680 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-19 01:17:40.432688 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ef5ae111-a274-4bc5-9c57-2c378a315c94]
2026-03-19 01:17:40.432696 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-19 01:17:40.432704 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=2983e37f-ff18-4397-a528-d29841633044]
2026-03-19 01:17:40.432712 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-19 01:17:40.432720 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=8005c211-1977-4346-9215-8c02c645a561]
2026-03-19 01:17:40.432728 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-19 01:17:40.432736 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=39a12f04-eb11-463b-945b-6422296bef25]
2026-03-19 01:17:40.432744 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-19 01:17:40.432751 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=ec4ec8e3-4b51-48ff-b813-6fb79c69c837]
2026-03-19 01:17:40.432759 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-19 01:17:40.432767 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=4a6b2030-4445-4405-83af-a00ae406896a]
2026-03-19 01:17:40.432776 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=87e02d96-f561-429b-839e-a01d7bbfdcd5]
2026-03-19 01:17:40.571192 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=64446432-0e9a-41a9-b8e8-d611f4571805]
2026-03-19 01:17:40.624047 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9ce12750-9ab2-4f22-8028-02cf815edc36]
2026-03-19 01:17:40.761253 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=b7369bd3-f010-48be-9641-cba4dd81a368]
2026-03-19 01:17:40.915848 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=f5265f94-087e-496e-9a7a-c0112583624c]
2026-03-19 01:17:41.068195 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9f1ad232-b8a0-4e3a-8595-52fc09da3bcf]
2026-03-19 01:17:41.126513 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=25b4c106-9819-40e7-bd9d-2cc07c002b6a]
2026-03-19 01:17:41.128039 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=85875a76-0955-4a97-ba0e-17a0e71b656f]
2026-03-19 01:17:41.595619 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=20273664-9460-470d-857b-a6a245c7b40e]
2026-03-19 01:17:41.612747 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-19 01:17:41.622406 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-19 01:17:41.633958 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-19 01:17:41.634188 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-19 01:17:41.643524 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-19 01:17:41.645065 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-19 01:17:41.646180 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-19 01:17:42.970296 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=3bd668a1-fd59-42c0-8d6f-dd039af6d7e9]
2026-03-19 01:17:42.979285 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-19 01:17:42.981784 | orchestrator | local_file.inventory: Creating...
2026-03-19 01:17:42.984569 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-19 01:17:42.988921 | orchestrator | local_file.inventory: Creation complete after 0s [id=3721bfb2135569ca17fff967112988d0495addfb]
2026-03-19 01:17:42.990169 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7ed6f6e236420e381547fee786a2fd2390cce72e]
2026-03-19 01:17:44.303761 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=3bd668a1-fd59-42c0-8d6f-dd039af6d7e9]
2026-03-19 01:17:51.625012 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-19 01:17:51.634303 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-19 01:17:51.635490 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-19 01:17:51.644763 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-19 01:17:51.649131 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-19 01:17:51.649275 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-19 01:18:01.625166 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-19 01:18:01.635030 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-19 01:18:01.636194 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-19 01:18:01.645488 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-19 01:18:01.649482 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-19 01:18:01.649571 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-19 01:18:01.921894 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=159b00ae-18be-48c9-b795-02a1fa8c11aa]
2026-03-19 01:18:01.973346 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=8c06e95b-1cfc-4942-a2fa-4d681a2bcc05]
2026-03-19 01:18:02.101550 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=346dc570-82e7-4b27-87b8-79f3428f8a49]
2026-03-19 01:18:11.648434 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-19 01:18:11.649603 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-19 01:18:11.649688 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-19 01:18:12.178094 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=4a80e262-593c-4875-b1b4-3ea93f755c62]
2026-03-19 01:18:12.210112 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=67f846b6-94ca-48d4-bb23-80cce423c046]
2026-03-19 01:18:12.247153 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=b14fb3e8-ca0f-413a-aeb4-b447d22d2143]
2026-03-19 01:18:12.266434 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-19 01:18:12.270661 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-19 01:18:12.278961 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-19 01:18:12.279808 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7248405313592513556]
2026-03-19 01:18:12.281638 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-19 01:18:12.282290 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-19 01:18:12.282593 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-19 01:18:12.284852 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-19 01:18:12.287911 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-19 01:18:12.294105 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-19 01:18:12.310110 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-19 01:18:12.313191 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-19 01:18:15.655094 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=8c06e95b-1cfc-4942-a2fa-4d681a2bcc05/159498f1-f6fb-4526-96c5-103a28738ba8]
2026-03-19 01:18:15.669260 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=159b00ae-18be-48c9-b795-02a1fa8c11aa/39b473cc-c557-499b-ae61-29aaa57bd422]
2026-03-19 01:18:15.690372 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=b14fb3e8-ca0f-413a-aeb4-b447d22d2143/91fa61f2-01b9-4964-86cf-d0da46381906]
2026-03-19 01:18:15.699888 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=8c06e95b-1cfc-4942-a2fa-4d681a2bcc05/740ce1a0-d0ce-4991-9b3f-fd403e7e525e]
2026-03-19 01:18:15.725174 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=b14fb3e8-ca0f-413a-aeb4-b447d22d2143/e6be47e7-14ad-42f7-995f-7ba3ed74c5ff]
2026-03-19 01:18:15.853057 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=159b00ae-18be-48c9-b795-02a1fa8c11aa/882bbde8-c2a7-4908-ad99-b7a0a7d616d1]
2026-03-19 01:18:21.808393 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=8c06e95b-1cfc-4942-a2fa-4d681a2bcc05/77d1d0bc-0a63-49dd-b34a-7227460faeb5]
2026-03-19 01:18:21.841305 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=b14fb3e8-ca0f-413a-aeb4-b447d22d2143/6ca08e20-d893-4525-9d75-036a26f1ab97]
2026-03-19 01:18:22.122401 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=159b00ae-18be-48c9-b795-02a1fa8c11aa/57dec018-1465-4558-908d-748a1c147c6d]
2026-03-19 01:18:22.316048 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-19 01:18:32.316768 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-19 01:18:32.580569 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=446a0175-8a3e-4aad-8637-67f02684d3ff]
2026-03-19 01:18:32.594610 | orchestrator |
2026-03-19 01:18:32.594693 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-19 01:18:32.594704 | orchestrator |
2026-03-19 01:18:32.594711 | orchestrator | Outputs:
2026-03-19 01:18:32.594719 | orchestrator |
2026-03-19 01:18:32.594725 | orchestrator | manager_address = 
2026-03-19 01:18:32.594732 | orchestrator | private_key = 
2026-03-19 01:18:32.976748 | orchestrator | ok: Runtime: 0:01:06.331059
2026-03-19 01:18:33.009290 |
2026-03-19 01:18:33.009420 | TASK [Fetch manager address]
2026-03-19 01:18:33.468745 | orchestrator | ok
2026-03-19 01:18:33.477956 |
2026-03-19 01:18:33.478147 | TASK [Set manager_host address]
2026-03-19 01:18:33.559341 | orchestrator | ok
2026-03-19 01:18:33.569164 |
2026-03-19 01:18:33.569291 | LOOP [Update ansible collections]
2026-03-19 01:18:34.601661 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 01:18:34.602233 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-19 01:18:34.602466 | orchestrator | Starting galaxy collection install process
2026-03-19 01:18:34.602556 | orchestrator | Process install dependency map
2026-03-19 01:18:34.602623 | orchestrator | Starting collection install process
2026-03-19 01:18:34.602685 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-03-19 01:18:34.602755 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-03-19 01:18:34.602828 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-19 01:18:34.603036 | orchestrator | ok: Item: commons Runtime: 0:00:00.689617
2026-03-19 01:18:35.651051 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-19 01:18:35.651281 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 01:18:35.651360 | orchestrator | Starting galaxy collection install process
2026-03-19 01:18:35.651410 | orchestrator | Process install dependency map
2026-03-19 01:18:35.651447 | orchestrator | Starting collection install process
2026-03-19 01:18:35.651481 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-03-19 01:18:35.651516 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-03-19 01:18:35.651548 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-19 01:18:35.651604 | orchestrator | ok: Item: services Runtime: 0:00:00.765888
2026-03-19 01:18:35.673525 |
2026-03-19 01:18:35.673681 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-19 01:18:46.259279 | orchestrator | ok
2026-03-19 01:18:46.273142 |
2026-03-19 01:18:46.273293 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-19 01:19:46.320383 | orchestrator | ok
2026-03-19 01:19:46.335317 |
2026-03-19 01:19:46.335520 | TASK [Fetch manager ssh hostkey]
2026-03-19 01:19:47.916462 | orchestrator | Output suppressed because no_log was given
2026-03-19 01:19:47.931087 |
2026-03-19 01:19:47.931252 | TASK [Get ssh keypair from terraform environment]
2026-03-19 01:19:48.467049 | orchestrator | ok: Runtime: 0:00:00.007928
2026-03-19 01:19:48.483355 |
2026-03-19 01:19:48.483522 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-19 01:19:48.521290 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-19 01:19:48.530411 |
2026-03-19 01:19:48.530534 | TASK [Run manager part 0]
2026-03-19 01:19:49.674044 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-19 01:19:49.730000 | orchestrator |
2026-03-19 01:19:49.730111 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-19 01:19:49.730124 | orchestrator |
2026-03-19 01:19:49.730149 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-19 01:19:51.571814 | orchestrator | ok: [testbed-manager]
2026-03-19 01:19:51.571863 | orchestrator |
2026-03-19 01:19:51.571889 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-19 01:19:51.571901 | orchestrator |
2026-03-19 01:19:51.571913 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 01:19:53.506061 | orchestrator | ok: [testbed-manager]
2026-03-19 01:19:53.506097 | orchestrator |
2026-03-19 01:19:53.506104 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-19 01:19:54.135979 | orchestrator | ok: [testbed-manager]
2026-03-19 01:19:54.136035 | orchestrator |
2026-03-19 01:19:54.136043 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-19 01:19:54.187102 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.187148 | orchestrator |
2026-03-19 01:19:54.187157 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-19 01:19:54.219730 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.219798 | orchestrator |
2026-03-19 01:19:54.219806 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-19 01:19:54.254195 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.254270 | orchestrator |
2026-03-19 01:19:54.254280 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-19 01:19:54.288444 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.288505 | orchestrator |
2026-03-19 01:19:54.288536 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-19 01:19:54.320700 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.320759 | orchestrator |
2026-03-19 01:19:54.320771 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-19 01:19:54.354108 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.354160 | orchestrator |
2026-03-19 01:19:54.354168 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-19 01:19:54.389399 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:19:54.389450 | orchestrator |
2026-03-19 01:19:54.389458 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-19 01:19:55.109983 | orchestrator | changed: [testbed-manager]
2026-03-19 01:19:55.110105 | orchestrator |
2026-03-19 01:19:55.110122 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-19 01:22:32.675669 | orchestrator | changed: [testbed-manager]
2026-03-19 01:22:32.675754 | orchestrator |
2026-03-19 01:22:32.675769 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-19 01:24:19.026340 | orchestrator | changed: [testbed-manager]
2026-03-19 01:24:19.026422 | orchestrator |
2026-03-19 01:24:19.026433 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-19 01:24:42.868362 | orchestrator | changed: [testbed-manager]
2026-03-19 01:24:42.868460 | orchestrator |
2026-03-19 01:24:42.868477 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-19 01:24:51.340129 | orchestrator | changed: [testbed-manager]
2026-03-19 01:24:51.340233 | orchestrator |
2026-03-19 01:24:51.340249 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-19 01:24:51.382876 | orchestrator | ok: [testbed-manager]
2026-03-19 01:24:51.382925 | orchestrator |
2026-03-19 01:24:51.382933 | orchestrator | TASK [Get current user] ********************************************************
2026-03-19 01:24:52.192410 | orchestrator | ok: [testbed-manager]
2026-03-19 01:24:52.192475 | orchestrator |
2026-03-19 01:24:52.192499 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-19 01:24:52.945875 | orchestrator | changed: [testbed-manager]
2026-03-19 01:24:52.945943 | orchestrator |
2026-03-19 01:24:52.945954 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-19 01:24:59.046823 | orchestrator | changed: [testbed-manager]
2026-03-19 01:24:59.046920 | orchestrator |
2026-03-19 01:24:59.046998 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-19 01:25:04.639043 | orchestrator | changed: [testbed-manager]
2026-03-19 01:25:04.639117 | orchestrator |
2026-03-19 01:25:04.639127 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-19 01:25:07.286157 | orchestrator | changed: [testbed-manager]
2026-03-19 01:25:07.286228 | orchestrator |
2026-03-19 01:25:07.286237 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-03-19 01:25:08.964158 | orchestrator | changed: [testbed-manager] 2026-03-19 01:25:08.964257 | orchestrator | 2026-03-19 01:25:08.964275 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-19 01:25:10.063414 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-19 01:25:10.063529 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-19 01:25:10.063538 | orchestrator | 2026-03-19 01:25:10.063544 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-19 01:25:10.104221 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-19 01:25:10.104303 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-19 01:25:10.104316 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-19 01:25:10.104329 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-19 01:25:13.323871 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-19 01:25:13.323923 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-19 01:25:13.323931 | orchestrator | 2026-03-19 01:25:13.323939 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-19 01:25:13.838519 | orchestrator | changed: [testbed-manager] 2026-03-19 01:25:13.838595 | orchestrator | 2026-03-19 01:25:13.838606 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-19 01:26:33.809040 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-19 01:26:33.809203 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-19 01:26:33.809218 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-19 01:26:33.809224 | orchestrator | 2026-03-19 01:26:33.809230 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-19 01:26:36.072436 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-19 01:26:36.072569 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-19 01:26:36.072587 | orchestrator | 2026-03-19 01:26:36.072600 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-19 01:26:36.072613 | orchestrator | 2026-03-19 01:26:36.072625 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:26:37.476839 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:37.476877 | orchestrator | 2026-03-19 01:26:37.476884 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-19 01:26:37.520004 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:37.520044 | 
orchestrator | 2026-03-19 01:26:37.520051 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-19 01:26:37.605686 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:37.605738 | orchestrator | 2026-03-19 01:26:37.605748 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-19 01:26:38.447951 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:38.447992 | orchestrator | 2026-03-19 01:26:38.447999 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-19 01:26:39.146996 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:39.147034 | orchestrator | 2026-03-19 01:26:39.147040 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-19 01:26:40.494700 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-19 01:26:40.494777 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-19 01:26:40.494784 | orchestrator | 2026-03-19 01:26:40.494805 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-19 01:26:41.877820 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:41.877898 | orchestrator | 2026-03-19 01:26:41.877906 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-19 01:26:43.633577 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-19 01:26:43.633706 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-19 01:26:43.633732 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-19 01:26:43.633752 | orchestrator | 2026-03-19 01:26:43.633774 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-19 01:26:43.694344 | orchestrator | skipping: 
[testbed-manager] 2026-03-19 01:26:43.694496 | orchestrator | 2026-03-19 01:26:43.694525 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-19 01:26:43.769066 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:43.769161 | orchestrator | 2026-03-19 01:26:43.769180 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-19 01:26:44.332092 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:44.332216 | orchestrator | 2026-03-19 01:26:44.332232 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-19 01:26:44.405196 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:44.405293 | orchestrator | 2026-03-19 01:26:44.405311 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-19 01:26:45.242350 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 01:26:45.242506 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:45.242536 | orchestrator | 2026-03-19 01:26:45.242551 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-19 01:26:45.276749 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:45.276828 | orchestrator | 2026-03-19 01:26:45.276840 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-19 01:26:45.314223 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:45.314338 | orchestrator | 2026-03-19 01:26:45.314363 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-19 01:26:45.342145 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:45.342206 | orchestrator | 2026-03-19 01:26:45.342214 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-19 01:26:45.419487 | 
orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:45.419549 | orchestrator | 2026-03-19 01:26:45.419556 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-19 01:26:46.086161 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:46.086261 | orchestrator | 2026-03-19 01:26:46.086279 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-19 01:26:46.086293 | orchestrator | 2026-03-19 01:26:46.086305 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:26:47.388922 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:47.388982 | orchestrator | 2026-03-19 01:26:47.388989 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-19 01:26:48.323126 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:48.323199 | orchestrator | 2026-03-19 01:26:48.323210 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:26:48.323220 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-19 01:26:48.323228 | orchestrator | 2026-03-19 01:26:48.823453 | orchestrator | ok: Runtime: 0:06:59.586089 2026-03-19 01:26:48.842452 | 2026-03-19 01:26:48.842608 | TASK [Point out that logging in on the manager is now possible] 2026-03-19 01:26:48.892206 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-19 01:26:48.902800 | 2026-03-19 01:26:48.902988 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-19 01:26:48.952095 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-19 01:26:48.961861 | 2026-03-19 01:26:48.962051 | TASK [Run manager part 1 + 2] 2026-03-19 01:26:49.917240 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-19 01:26:49.985519 | orchestrator | 2026-03-19 01:26:49.985639 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-19 01:26:49.985667 | orchestrator | 2026-03-19 01:26:49.985701 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:26:52.959103 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:52.959165 | orchestrator | 2026-03-19 01:26:52.959190 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-19 01:26:52.996019 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:52.996091 | orchestrator | 2026-03-19 01:26:52.996106 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-19 01:26:53.046604 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:53.046725 | orchestrator | 2026-03-19 01:26:53.046758 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-19 01:26:53.101034 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:53.101108 | orchestrator | 2026-03-19 01:26:53.101122 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-19 01:26:53.178871 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:53.178937 | orchestrator | 2026-03-19 01:26:53.178946 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-19 01:26:53.243006 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:53.243072 | orchestrator | 2026-03-19 01:26:53.243084 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-19 01:26:53.286147 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-19 01:26:53.286202 | orchestrator | 2026-03-19 01:26:53.286208 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-19 01:26:54.017366 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:54.017472 | orchestrator | 2026-03-19 01:26:54.017483 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-19 01:26:54.067136 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:26:54.067317 | orchestrator | 2026-03-19 01:26:54.067331 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-19 01:26:55.436569 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:55.436627 | orchestrator | 2026-03-19 01:26:55.436635 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-19 01:26:55.995142 | orchestrator | ok: [testbed-manager] 2026-03-19 01:26:55.995206 | orchestrator | 2026-03-19 01:26:55.995215 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-19 01:26:57.070070 | orchestrator | changed: [testbed-manager] 2026-03-19 01:26:57.070129 | orchestrator | 2026-03-19 01:26:57.070142 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-19 01:27:11.849136 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:11.849894 | orchestrator | 2026-03-19 01:27:11.849930 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-19 01:27:12.501930 | orchestrator | ok: [testbed-manager] 2026-03-19 01:27:12.501989 | orchestrator | 2026-03-19 01:27:12.502002 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-19 01:27:12.549816 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:27:12.549866 | orchestrator | 2026-03-19 01:27:12.549875 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-19 01:27:13.488642 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:13.488721 | orchestrator | 2026-03-19 01:27:13.488748 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-19 01:27:14.411838 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:14.411900 | orchestrator | 2026-03-19 01:27:14.411914 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-19 01:27:14.954174 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:14.954246 | orchestrator | 2026-03-19 01:27:14.954268 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-19 01:27:15.003008 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-19 01:27:15.003133 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-19 01:27:15.003148 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-19 01:27:15.003157 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-19 01:27:17.145598 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:17.145665 | orchestrator | 2026-03-19 01:27:17.145673 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-19 01:27:25.699017 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-19 01:27:25.699112 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-19 01:27:25.699126 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-19 01:27:25.699134 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-19 01:27:25.699147 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-19 01:27:25.699155 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-19 01:27:25.699162 | orchestrator | 2026-03-19 01:27:25.699169 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-19 01:27:26.675665 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:26.675749 | orchestrator | 2026-03-19 01:27:26.675761 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-19 01:27:26.714369 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:27:26.714502 | orchestrator | 2026-03-19 01:27:26.714515 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-19 01:27:29.743870 | orchestrator | changed: [testbed-manager] 2026-03-19 01:27:29.743968 | orchestrator | 2026-03-19 01:27:29.743981 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-19 01:27:29.780523 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:27:29.780593 | orchestrator | 2026-03-19 01:27:29.780604 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-19 01:29:07.037161 | orchestrator | changed: [testbed-manager] 2026-03-19 
01:29:07.037251 | orchestrator | 2026-03-19 01:29:07.037265 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-19 01:29:08.214644 | orchestrator | ok: [testbed-manager] 2026-03-19 01:29:08.214756 | orchestrator | 2026-03-19 01:29:08.214774 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:29:08.214788 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-19 01:29:08.214800 | orchestrator | 2026-03-19 01:29:08.598939 | orchestrator | ok: Runtime: 0:02:19.040668 2026-03-19 01:29:08.617495 | 2026-03-19 01:29:08.617674 | TASK [Reboot manager] 2026-03-19 01:29:10.156344 | orchestrator | ok: Runtime: 0:00:00.970022 2026-03-19 01:29:10.173551 | 2026-03-19 01:29:10.173714 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-19 01:29:25.058213 | orchestrator | ok 2026-03-19 01:29:25.069100 | 2026-03-19 01:29:25.069268 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-19 01:30:25.111017 | orchestrator | ok 2026-03-19 01:30:25.120862 | 2026-03-19 01:30:25.121050 | TASK [Deploy manager + bootstrap nodes] 2026-03-19 01:30:27.576844 | orchestrator | 2026-03-19 01:30:27.576991 | orchestrator | # DEPLOY MANAGER 2026-03-19 01:30:27.577001 | orchestrator | 2026-03-19 01:30:27.577006 | orchestrator | + set -e 2026-03-19 01:30:27.577011 | orchestrator | + echo 2026-03-19 01:30:27.577018 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-19 01:30:27.577025 | orchestrator | + echo 2026-03-19 01:30:27.577049 | orchestrator | + cat /opt/manager-vars.sh 2026-03-19 01:30:27.580552 | orchestrator | export NUMBER_OF_NODES=6 2026-03-19 01:30:27.580619 | orchestrator | 2026-03-19 01:30:27.580629 | orchestrator | export CEPH_VERSION=reef 2026-03-19 01:30:27.580640 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-19 01:30:27.580649 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-19 01:30:27.580667 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-19 01:30:27.580674 | orchestrator | 2026-03-19 01:30:27.580684 | orchestrator | export ARA=false 2026-03-19 01:30:27.580692 | orchestrator | export DEPLOY_MODE=manager 2026-03-19 01:30:27.580702 | orchestrator | export TEMPEST=false 2026-03-19 01:30:27.580709 | orchestrator | export IS_ZUUL=true 2026-03-19 01:30:27.580715 | orchestrator | 2026-03-19 01:30:27.580727 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:30:27.580734 | orchestrator | export EXTERNAL_API=false 2026-03-19 01:30:27.580740 | orchestrator | 2026-03-19 01:30:27.580746 | orchestrator | export IMAGE_USER=ubuntu 2026-03-19 01:30:27.580756 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-19 01:30:27.580762 | orchestrator | 2026-03-19 01:30:27.580769 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-19 01:30:27.580782 | orchestrator | 2026-03-19 01:30:27.580788 | orchestrator | + echo 2026-03-19 01:30:27.580796 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 01:30:27.581346 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 01:30:27.581381 | orchestrator | ++ INTERACTIVE=false 2026-03-19 01:30:27.581389 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 01:30:27.581395 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 01:30:27.581601 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 01:30:27.581614 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 01:30:27.581621 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 01:30:27.581628 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 01:30:27.581634 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 01:30:27.581640 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 01:30:27.581647 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 01:30:27.581656 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 01:30:27.581662 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 01:30:27.581668 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 01:30:27.581688 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 01:30:27.581694 | orchestrator | ++ export ARA=false 2026-03-19 01:30:27.581701 | orchestrator | ++ ARA=false 2026-03-19 01:30:27.581790 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 01:30:27.581797 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 01:30:27.581801 | orchestrator | ++ export TEMPEST=false 2026-03-19 01:30:27.581807 | orchestrator | ++ TEMPEST=false 2026-03-19 01:30:27.581814 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 01:30:27.581820 | orchestrator | ++ IS_ZUUL=true 2026-03-19 01:30:27.581826 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:30:27.581832 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:30:27.581841 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 01:30:27.581847 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 01:30:27.581853 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 01:30:27.581860 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 01:30:27.581866 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 01:30:27.581871 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 01:30:27.581875 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 01:30:27.581879 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 01:30:27.581883 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-19 01:30:27.648461 | orchestrator | + docker version 2026-03-19 01:30:27.760753 | orchestrator | Client: Docker Engine - Community 2026-03-19 01:30:27.760854 | orchestrator | Version: 27.5.1 2026-03-19 01:30:27.760868 | orchestrator | API version: 1.47 2026-03-19 01:30:27.760878 | orchestrator | Go version: go1.22.11 2026-03-19 01:30:27.760887 | orchestrator | Git commit: 9f9e405 2026-03-19 01:30:27.760896 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-19 01:30:27.760906 | orchestrator | OS/Arch: linux/amd64 2026-03-19 01:30:27.760915 | orchestrator | Context: default 2026-03-19 01:30:27.760924 | orchestrator | 2026-03-19 01:30:27.760933 | orchestrator | Server: Docker Engine - Community 2026-03-19 01:30:27.760942 | orchestrator | Engine: 2026-03-19 01:30:27.760952 | orchestrator | Version: 27.5.1 2026-03-19 01:30:27.760961 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-19 01:30:27.761004 | orchestrator | Go version: go1.22.11 2026-03-19 01:30:27.761013 | orchestrator | Git commit: 4c9b3b0 2026-03-19 01:30:27.761023 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-19 01:30:27.761031 | orchestrator | OS/Arch: linux/amd64 2026-03-19 01:30:27.761040 | orchestrator | Experimental: false 2026-03-19 01:30:27.761049 | orchestrator | containerd: 2026-03-19 01:30:27.761058 | orchestrator | Version: v2.2.2 2026-03-19 01:30:27.761067 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-19 01:30:27.761076 | orchestrator | runc: 2026-03-19 01:30:27.761085 | orchestrator | Version: 1.3.4 2026-03-19 01:30:27.761094 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-19 01:30:27.761103 | orchestrator | docker-init: 2026-03-19 01:30:27.761112 | orchestrator | Version: 0.19.0 2026-03-19 01:30:27.761121 | orchestrator | GitCommit: de40ad0 2026-03-19 01:30:27.763247 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-19 01:30:27.770617 | orchestrator | + set -e 2026-03-19 01:30:27.770694 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 01:30:27.770712 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 01:30:27.770726 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 01:30:27.770741 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 01:30:27.770755 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 01:30:27.770766 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 
01:30:27.770776 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 01:30:27.770784 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 01:30:27.770793 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 01:30:27.770802 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 01:30:27.770811 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 01:30:27.770819 | orchestrator | ++ export ARA=false 2026-03-19 01:30:27.770829 | orchestrator | ++ ARA=false 2026-03-19 01:30:27.770837 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 01:30:27.770846 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 01:30:27.770855 | orchestrator | ++ export TEMPEST=false 2026-03-19 01:30:27.770863 | orchestrator | ++ TEMPEST=false 2026-03-19 01:30:27.770872 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 01:30:27.770880 | orchestrator | ++ IS_ZUUL=true 2026-03-19 01:30:27.770889 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:30:27.770898 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:30:27.770906 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 01:30:27.770915 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 01:30:27.770923 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 01:30:27.770932 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 01:30:27.770941 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 01:30:27.770949 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 01:30:27.770958 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 01:30:27.770966 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 01:30:27.770975 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 01:30:27.770983 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 01:30:27.770992 | orchestrator | ++ INTERACTIVE=false 2026-03-19 01:30:27.771000 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 01:30:27.771013 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-19 01:30:27.771022 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-19 01:30:27.771031 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-19 01:30:27.777295 | orchestrator | + set -e 2026-03-19 01:30:27.777392 | orchestrator | + VERSION=9.5.0 2026-03-19 01:30:27.777415 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-19 01:30:27.783814 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-19 01:30:27.783872 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-19 01:30:27.788686 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-19 01:30:27.792093 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-19 01:30:27.800911 | orchestrator | /opt/configuration ~ 2026-03-19 01:30:27.800979 | orchestrator | + set -e 2026-03-19 01:30:27.800992 | orchestrator | + pushd /opt/configuration 2026-03-19 01:30:27.801004 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 01:30:27.803519 | orchestrator | + source /opt/venv/bin/activate 2026-03-19 01:30:27.805109 | orchestrator | ++ deactivate nondestructive 2026-03-19 01:30:27.805175 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:27.805202 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:27.805262 | orchestrator | ++ hash -r 2026-03-19 01:30:27.805287 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:27.805304 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-19 01:30:27.805321 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-19 01:30:27.805340 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-19 01:30:27.805392 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-19 01:30:27.805409 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-19 01:30:27.805426 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-19 01:30:27.805445 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-19 01:30:27.805477 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:30:27.805496 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:30:27.805515 | orchestrator | ++ export PATH 2026-03-19 01:30:27.805534 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:27.805554 | orchestrator | ++ '[' -z '' ']' 2026-03-19 01:30:27.805565 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-19 01:30:27.805576 | orchestrator | ++ PS1='(venv) ' 2026-03-19 01:30:27.805587 | orchestrator | ++ export PS1 2026-03-19 01:30:27.805598 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-19 01:30:27.805608 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-19 01:30:27.805619 | orchestrator | ++ hash -r 2026-03-19 01:30:27.805630 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-19 01:30:28.753947 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-19 01:30:28.754896 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-19 01:30:28.756332 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-19 01:30:28.758400 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-19 01:30:28.758894 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-19 01:30:28.769263 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-19 01:30:28.770670 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-19 01:30:28.771748 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-19 01:30:28.773108 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-19 01:30:28.805110 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-19 01:30:28.806654 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-19 01:30:28.809439 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-19 01:30:28.809712 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-19 01:30:28.814121 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-19 01:30:29.016429 | orchestrator | ++ which gilt 2026-03-19 01:30:29.020310 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-19 01:30:29.020418 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-19 01:30:29.228332 | orchestrator | osism.cfg-generics: 2026-03-19 01:30:29.356584 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-19 01:30:29.356702 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-19 01:30:29.356731 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-19 01:30:29.356745 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-19 01:30:30.086404 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-19 01:30:30.096181 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-19 01:30:30.428147 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-19 01:30:30.471578 | orchestrator | ~ 2026-03-19 01:30:30.471691 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 01:30:30.471706 | orchestrator | + deactivate 2026-03-19 01:30:30.471719 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-19 01:30:30.471732 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:30:30.471743 | orchestrator | + export PATH 2026-03-19 01:30:30.471755 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-19 01:30:30.471766 | orchestrator | + '[' -n '' ']' 2026-03-19 01:30:30.471781 | orchestrator | + hash -r 2026-03-19 01:30:30.471800 | orchestrator | + '[' -n '' ']' 2026-03-19 01:30:30.471816 | orchestrator | + unset VIRTUAL_ENV 2026-03-19 01:30:30.471831 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-19 01:30:30.471848 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-19 01:30:30.471863 | orchestrator | + unset -f deactivate 2026-03-19 01:30:30.471880 | orchestrator | + popd 2026-03-19 01:30:30.472654 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-19 01:30:30.472746 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-19 01:30:30.472941 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-19 01:30:30.516898 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 01:30:30.516983 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-19 01:30:30.517792 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-19 01:30:30.570719 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 01:30:30.571416 | orchestrator | ++ semver 2024.2 2025.1 2026-03-19 01:30:30.623279 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 01:30:30.623449 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-19 01:30:30.710345 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 01:30:30.710501 | orchestrator | + source /opt/venv/bin/activate 2026-03-19 01:30:30.710527 | orchestrator | ++ deactivate nondestructive 2026-03-19 01:30:30.710544 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:30.710560 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:30.710576 | orchestrator | ++ hash -r 2026-03-19 01:30:30.710594 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:30.710611 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-19 01:30:30.710629 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-19 01:30:30.710640 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-19 01:30:30.710651 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-19 01:30:30.710676 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-19 01:30:30.710686 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-19 01:30:30.710705 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-19 01:30:30.710716 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:30:30.710765 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:30:30.710777 | orchestrator | ++ export PATH 2026-03-19 01:30:30.710786 | orchestrator | ++ '[' -n '' ']' 2026-03-19 01:30:30.710796 | orchestrator | ++ '[' -z '' ']' 2026-03-19 01:30:30.710806 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-19 01:30:30.710815 | orchestrator | ++ PS1='(venv) ' 2026-03-19 01:30:30.710825 | orchestrator | ++ export PS1 2026-03-19 01:30:30.710834 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-19 01:30:30.710844 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-19 01:30:30.710853 | orchestrator | ++ hash -r 2026-03-19 01:30:30.710863 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-19 01:30:31.706284 | orchestrator | 2026-03-19 01:30:31.706438 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-19 01:30:31.706460 | orchestrator | 2026-03-19 01:30:31.706473 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-19 01:30:32.226793 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:32.226903 | orchestrator | 2026-03-19 01:30:32.226917 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-03-19 01:30:33.196244 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:33.196478 | orchestrator | 2026-03-19 01:30:33.196499 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-19 01:30:33.196551 | orchestrator | 2026-03-19 01:30:33.196563 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:30:35.404862 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:35.404988 | orchestrator | 2026-03-19 01:30:35.405016 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-19 01:30:35.439289 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:35.439417 | orchestrator | 2026-03-19 01:30:35.439436 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-19 01:30:35.892857 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:35.892965 | orchestrator | 2026-03-19 01:30:35.892984 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-19 01:30:35.927973 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:30:35.928070 | orchestrator | 2026-03-19 01:30:35.928084 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-19 01:30:36.253461 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:36.253573 | orchestrator | 2026-03-19 01:30:36.253590 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-19 01:30:36.576740 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:36.576852 | orchestrator | 2026-03-19 01:30:36.576869 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-19 01:30:36.687877 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:30:36.687985 | orchestrator | 2026-03-19 01:30:36.688001 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-19 01:30:36.688014 | orchestrator | 2026-03-19 01:30:36.688026 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:30:38.411231 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:38.411339 | orchestrator | 2026-03-19 01:30:38.411415 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-19 01:30:38.492678 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-19 01:30:38.492784 | orchestrator | 2026-03-19 01:30:38.492799 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-19 01:30:38.557079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-19 01:30:38.557184 | orchestrator | 2026-03-19 01:30:38.557197 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-19 01:30:39.643106 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-19 01:30:39.643239 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-19 01:30:39.643260 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-19 01:30:39.643277 | orchestrator | 2026-03-19 01:30:39.643296 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-19 01:30:41.392055 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-19 01:30:41.392149 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-19 01:30:41.392160 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-19 01:30:41.392168 | orchestrator | 2026-03-19 01:30:41.392177 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-19 01:30:42.014982 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 01:30:42.015066 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:42.015072 | orchestrator | 2026-03-19 01:30:42.015077 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-19 01:30:42.636727 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 01:30:42.636824 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:42.636835 | orchestrator | 2026-03-19 01:30:42.636844 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-19 01:30:42.688096 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:30:42.688222 | orchestrator | 2026-03-19 01:30:42.688245 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-19 01:30:43.040171 | orchestrator | ok: [testbed-manager] 2026-03-19 01:30:43.040284 | orchestrator | 2026-03-19 01:30:43.040301 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-19 01:30:43.101829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-19 01:30:43.101939 | orchestrator | 2026-03-19 01:30:43.101957 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-19 01:30:44.134763 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:44.134870 | orchestrator | 2026-03-19 01:30:44.134887 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-19 01:30:44.901166 | orchestrator | changed: [testbed-manager] 2026-03-19 01:30:44.901282 | orchestrator | 2026-03-19 01:30:44.901306 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-19 01:30:58.592615 | 
orchestrator | changed: [testbed-manager] 2026-03-19 01:30:58.592737 | orchestrator | 2026-03-19 01:30:58.592756 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-19 01:30:58.636706 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:30:58.636819 | orchestrator | 2026-03-19 01:30:58.636862 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-19 01:30:58.636877 | orchestrator | 2026-03-19 01:30:58.636888 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:31:00.528007 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:00.528107 | orchestrator | 2026-03-19 01:31:00.528118 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-19 01:31:00.630201 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-19 01:31:00.630288 | orchestrator | 2026-03-19 01:31:00.630296 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-19 01:31:00.681797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 01:31:00.681871 | orchestrator | 2026-03-19 01:31:00.681878 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-19 01:31:02.943811 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:02.943952 | orchestrator | 2026-03-19 01:31:02.943981 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-19 01:31:02.987570 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:02.988059 | orchestrator | 2026-03-19 01:31:02.988074 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-19 01:31:03.103726 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-19 01:31:03.103827 | orchestrator | 2026-03-19 01:31:03.103835 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-19 01:31:05.931003 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-19 01:31:05.931111 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-19 01:31:05.931128 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-19 01:31:05.931141 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-19 01:31:05.931152 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-19 01:31:05.931163 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-19 01:31:05.931174 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-19 01:31:05.931185 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-19 01:31:05.931197 | orchestrator | 2026-03-19 01:31:05.931209 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-19 01:31:06.560895 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:06.560981 | orchestrator | 2026-03-19 01:31:06.560993 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-19 01:31:07.180909 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:07.181022 | orchestrator | 2026-03-19 01:31:07.181038 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-19 01:31:07.247493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-19 01:31:07.247632 | orchestrator | 2026-03-19 01:31:07.247658 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-19 01:31:08.402553 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-19 01:31:08.402671 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-19 01:31:08.402683 | orchestrator | 2026-03-19 01:31:08.402693 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-19 01:31:09.019088 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:09.019192 | orchestrator | 2026-03-19 01:31:09.019207 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-19 01:31:09.065799 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:31:09.065920 | orchestrator | 2026-03-19 01:31:09.065946 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-19 01:31:09.145439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-19 01:31:09.145538 | orchestrator | 2026-03-19 01:31:09.145552 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-19 01:31:09.773083 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:09.773190 | orchestrator | 2026-03-19 01:31:09.773207 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-19 01:31:09.839428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-19 01:31:09.839545 | orchestrator | 2026-03-19 01:31:09.839563 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-19 01:31:11.208787 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-19 01:31:11.208907 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-19 01:31:11.208925 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:11.208939 | orchestrator | 2026-03-19 01:31:11.208951 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-19 01:31:11.820473 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:11.820585 | orchestrator | 2026-03-19 01:31:11.820603 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-19 01:31:11.866725 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:31:11.866850 | orchestrator | 2026-03-19 01:31:11.866875 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-19 01:31:11.955970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-19 01:31:11.956074 | orchestrator | 2026-03-19 01:31:11.956089 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-19 01:31:12.463925 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:12.464032 | orchestrator | 2026-03-19 01:31:12.464048 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-19 01:31:12.851672 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:12.851789 | orchestrator | 2026-03-19 01:31:12.851810 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-19 01:31:14.059996 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-19 01:31:14.060131 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-19 01:31:14.060151 | orchestrator | 2026-03-19 01:31:14.060163 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-19 01:31:14.695593 | orchestrator | changed: [testbed-manager] 2026-03-19 
01:31:14.695710 | orchestrator | 2026-03-19 01:31:14.695727 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-19 01:31:15.067032 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:15.067125 | orchestrator | 2026-03-19 01:31:15.067141 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-19 01:31:15.410418 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:15.410534 | orchestrator | 2026-03-19 01:31:15.410550 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-19 01:31:15.462472 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:31:15.462567 | orchestrator | 2026-03-19 01:31:15.462579 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-19 01:31:15.533291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-19 01:31:15.533458 | orchestrator | 2026-03-19 01:31:15.533472 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-19 01:31:15.573971 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:15.574121 | orchestrator | 2026-03-19 01:31:15.574136 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-19 01:31:17.550215 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-19 01:31:17.550321 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-19 01:31:17.550391 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-19 01:31:17.550404 | orchestrator | 2026-03-19 01:31:17.550417 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-19 01:31:18.225167 | orchestrator | changed: [testbed-manager] 2026-03-19 
01:31:18.225281 | orchestrator | 2026-03-19 01:31:18.225298 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-19 01:31:18.922694 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:18.922814 | orchestrator | 2026-03-19 01:31:18.922834 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-19 01:31:19.609905 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:19.610068 | orchestrator | 2026-03-19 01:31:19.610085 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-19 01:31:19.683408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-19 01:31:19.683524 | orchestrator | 2026-03-19 01:31:19.683548 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-19 01:31:19.718665 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:19.718764 | orchestrator | 2026-03-19 01:31:19.718778 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-19 01:31:20.394468 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-19 01:31:20.394580 | orchestrator | 2026-03-19 01:31:20.394597 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-19 01:31:20.482920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-19 01:31:20.483027 | orchestrator | 2026-03-19 01:31:20.483042 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-19 01:31:21.171414 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:21.171517 | orchestrator | 2026-03-19 01:31:21.171530 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-19 01:31:21.797886 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:21.797988 | orchestrator | 2026-03-19 01:31:21.798001 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-19 01:31:21.838862 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:31:21.838985 | orchestrator | 2026-03-19 01:31:21.839008 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-19 01:31:21.893045 | orchestrator | ok: [testbed-manager] 2026-03-19 01:31:21.893146 | orchestrator | 2026-03-19 01:31:21.893159 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-19 01:31:22.688905 | orchestrator | changed: [testbed-manager] 2026-03-19 01:31:22.689034 | orchestrator | 2026-03-19 01:31:22.689056 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-19 01:32:30.863653 | orchestrator | changed: [testbed-manager] 2026-03-19 01:32:30.863805 | orchestrator | 2026-03-19 01:32:30.863832 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-19 01:32:31.838081 | orchestrator | ok: [testbed-manager] 2026-03-19 01:32:31.838167 | orchestrator | 2026-03-19 01:32:31.838174 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-19 01:32:31.891920 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:32:31.892020 | orchestrator | 2026-03-19 01:32:31.892035 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-19 01:32:34.103914 | orchestrator | changed: [testbed-manager] 2026-03-19 01:32:34.104014 | orchestrator | 2026-03-19 01:32:34.104029 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-19 01:32:34.155820 | orchestrator | ok: [testbed-manager] 2026-03-19 01:32:34.155934 | orchestrator | 2026-03-19 01:32:34.155949 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-19 01:32:34.155960 | orchestrator | 2026-03-19 01:32:34.155970 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-19 01:32:34.290945 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:32:34.291044 | orchestrator | 2026-03-19 01:32:34.291057 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-19 01:33:34.350635 | orchestrator | Pausing for 60 seconds 2026-03-19 01:33:34.350793 | orchestrator | changed: [testbed-manager] 2026-03-19 01:33:34.350823 | orchestrator | 2026-03-19 01:33:34.350859 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-19 01:33:37.798891 | orchestrator | changed: [testbed-manager] 2026-03-19 01:33:37.799010 | orchestrator | 2026-03-19 01:33:37.799027 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-19 01:34:19.315920 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-19 01:34:19.316011 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
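The handler above polls the manager service until it reports healthy, burning through its retry budget (50 retries here, two of which failed before success). A generic sketch of such a wait loop (retry count, delay, and the probe command are assumptions for illustration; against a real container the probe would be something like `docker inspect --format '{{.State.Health.Status}}' <name>`, which is standard Docker, not taken from the role):

```shell
# wait_healthy: poll a probe command until it prints "healthy", in the
# spirit of the "Wait for an healthy manager service" handler above.
# Arguments: max retries, delay between checks, then the probe command.
# All values are illustrative; the real handler is an Ansible until/retries.
wait_healthy() {
    local retries="$1" delay="$2"; shift 2
    local i status
    for i in $(seq 1 "$retries"); do
        status="$("$@" 2>/dev/null || true)"
        if [ "$status" = healthy ]; then
            echo "healthy after $i check(s)"
            return 0
        fi
        sleep "$delay"
    done
    echo "gave up after $retries checks" >&2
    return 1
}
```

The FAILED - RETRYING lines in the log are exactly this pattern surfacing through Ansible's `until`/`retries` mechanism: each failed probe decrements the remaining-retries counter until the healthcheck finally passes.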
2026-03-19 01:34:19.316017 | orchestrator | changed: [testbed-manager] 2026-03-19 01:34:19.316023 | orchestrator | 2026-03-19 01:34:19.316047 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-19 01:34:28.913246 | orchestrator | changed: [testbed-manager] 2026-03-19 01:34:28.913428 | orchestrator | 2026-03-19 01:34:28.913450 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-19 01:34:29.015212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-19 01:34:29.015368 | orchestrator | 2026-03-19 01:34:29.015386 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-19 01:34:29.015399 | orchestrator | 2026-03-19 01:34:29.015410 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-19 01:34:29.058922 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:34:29.059047 | orchestrator | 2026-03-19 01:34:29.059073 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-19 01:34:29.117870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-19 01:34:29.117981 | orchestrator | 2026-03-19 01:34:29.117997 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-19 01:34:29.890819 | orchestrator | changed: [testbed-manager] 2026-03-19 01:34:29.890945 | orchestrator | 2026-03-19 01:34:29.890968 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-19 01:34:32.875290 | orchestrator | ok: [testbed-manager] 2026-03-19 01:34:32.875420 | orchestrator | 2026-03-19 01:34:32.875438 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-19 01:34:32.946455 | orchestrator | ok: [testbed-manager] => { 2026-03-19 01:34:32.946561 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-19 01:34:32.946589 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-19 01:34:32.946609 | orchestrator | "Checking running containers against expected versions...", 2026-03-19 01:34:32.946632 | orchestrator | "", 2026-03-19 01:34:32.946651 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-19 01:34:32.946672 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-19 01:34:32.946692 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.946712 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-19 01:34:32.946732 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.946752 | orchestrator | "", 2026-03-19 01:34:32.946772 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-19 01:34:32.946792 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-19 01:34:32.946812 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.946862 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-19 01:34:32.946885 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.946907 | orchestrator | "", 2026-03-19 01:34:32.946926 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-19 01:34:32.946945 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-19 01:34:32.946967 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.946989 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-19 01:34:32.947010 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947032 | orchestrator | 
"", 2026-03-19 01:34:32.947053 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-19 01:34:32.947071 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-19 01:34:32.947090 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947108 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-19 01:34:32.947127 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947179 | orchestrator | "", 2026-03-19 01:34:32.947199 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-19 01:34:32.947224 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-19 01:34:32.947247 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947270 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-19 01:34:32.947291 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947373 | orchestrator | "", 2026-03-19 01:34:32.947397 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-19 01:34:32.947416 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.947435 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947454 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.947474 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947493 | orchestrator | "", 2026-03-19 01:34:32.947512 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-19 01:34:32.947532 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-19 01:34:32.947551 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947570 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-19 01:34:32.947589 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947609 | orchestrator | "", 2026-03-19 01:34:32.947629 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-19 01:34:32.947649 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-19 01:34:32.947670 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947690 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-19 01:34:32.947710 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947729 | orchestrator | "", 2026-03-19 01:34:32.947749 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-19 01:34:32.947768 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-19 01:34:32.947787 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947805 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-19 01:34:32.947823 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947841 | orchestrator | "", 2026-03-19 01:34:32.947858 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-19 01:34:32.947875 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-19 01:34:32.947892 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.947931 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-19 01:34:32.947949 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.947966 | orchestrator | "", 2026-03-19 01:34:32.947983 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-19 01:34:32.948000 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948017 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.948051 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948067 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.948083 | orchestrator | "", 2026-03-19 01:34:32.948099 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-19 01:34:32.948116 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948133 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.948150 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948167 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.948184 | orchestrator | "", 2026-03-19 01:34:32.948202 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-19 01:34:32.948220 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948239 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.948257 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948276 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.948293 | orchestrator | "", 2026-03-19 01:34:32.948308 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-19 01:34:32.948346 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948363 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.948380 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948418 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.948436 | orchestrator | "", 2026-03-19 01:34:32.948452 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-19 01:34:32.948469 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948485 | orchestrator | " Enabled: true", 2026-03-19 01:34:32.948514 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-19 01:34:32.948533 | orchestrator | " Status: ✅ MATCH", 2026-03-19 01:34:32.948549 | orchestrator | "", 2026-03-19 01:34:32.948566 | orchestrator | "=== Summary ===", 2026-03-19 01:34:32.948583 | orchestrator | "Errors (version mismatches): 0", 2026-03-19 01:34:32.948600 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-19 01:34:32.948616 | orchestrator | "", 2026-03-19 01:34:32.948632 | orchestrator | "✅ All running containers match expected versions!" 2026-03-19 01:34:32.948649 | orchestrator | ] 2026-03-19 01:34:32.948666 | orchestrator | } 2026-03-19 01:34:32.948683 | orchestrator | 2026-03-19 01:34:32.948699 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-19 01:34:32.997625 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:34:32.997711 | orchestrator | 2026-03-19 01:34:32.997730 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:34:32.997745 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-19 01:34:32.997759 | orchestrator | 2026-03-19 01:34:33.095800 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 01:34:33.095906 | orchestrator | + deactivate 2026-03-19 01:34:33.095934 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-19 01:34:33.095956 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 01:34:33.095977 | orchestrator | + export PATH 2026-03-19 01:34:33.095998 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-19 01:34:33.096018 | orchestrator | + '[' -n '' ']' 2026-03-19 01:34:33.096039 | orchestrator | + hash -r 2026-03-19 01:34:33.096058 | orchestrator | + '[' -n '' ']' 2026-03-19 01:34:33.096078 | orchestrator | + unset VIRTUAL_ENV 2026-03-19 01:34:33.096098 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-19 01:34:33.096118 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-19 01:34:33.096138 | orchestrator | + unset -f deactivate 2026-03-19 01:34:33.096158 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-19 01:34:33.103120 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 01:34:33.103207 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-19 01:34:33.103222 | orchestrator | + local max_attempts=60 2026-03-19 01:34:33.103235 | orchestrator | + local name=ceph-ansible 2026-03-19 01:34:33.103273 | orchestrator | + local attempt_num=1 2026-03-19 01:34:33.103955 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:34:33.142111 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:34:33.142195 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-19 01:34:33.142218 | orchestrator | + local max_attempts=60 2026-03-19 01:34:33.142239 | orchestrator | + local name=kolla-ansible 2026-03-19 01:34:33.142258 | orchestrator | + local attempt_num=1 2026-03-19 01:34:33.142937 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-19 01:34:33.173947 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:34:33.174084 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-19 01:34:33.174103 | orchestrator | + local max_attempts=60 2026-03-19 01:34:33.174115 | orchestrator | + local name=osism-ansible 2026-03-19 01:34:33.174127 | orchestrator | + local attempt_num=1 2026-03-19 01:34:33.175357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-19 01:34:33.208432 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:34:33.208488 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-19 01:34:33.208495 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-19 01:34:33.878173 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-19 01:34:34.036249 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-19 01:34:34.036347 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036364 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036375 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-19 01:34:34.036387 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-03-19 01:34:34.036399 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036424 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036431 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 56 seconds (healthy) 2026-03-19 01:34:34.036438 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036444 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-03-19 01:34:34.036450 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes 
ago Up About a minute (healthy) 2026-03-19 01:34:34.036456 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-03-19 01:34:34.036462 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036483 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-19 01:34:34.036490 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.036496 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-03-19 01:34:34.042167 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-19 01:34:34.088690 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 01:34:34.088758 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-19 01:34:34.094678 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-19 01:34:46.439175 | orchestrator | 2026-03-19 01:34:46 | INFO  | Task 08a676c1-daaf-45a8-ae14-b7aa1b11e9dd (resolvconf) was prepared for execution. 2026-03-19 01:34:46.439423 | orchestrator | 2026-03-19 01:34:46 | INFO  | It takes a moment until task 08a676c1-daaf-45a8-ae14-b7aa1b11e9dd (resolvconf) has been started and output is visible here. 
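The `semver 9.5.0 7.0.0` call above compares two version strings and its output feeds the `[[ 1 -ge 0 ]]` check that gates the `ansible.cfg` patch. A minimal sketch of such a comparison built on GNU `sort -V` (the function name `semver_cmp` and the 1/0/-1 output convention are assumptions; the actual `semver` helper in the testbed may differ):

```shell
# Print 1 if $1 > $2, 0 if equal, -1 if $1 < $2, using version-sort order.
# Hedged sketch of a semver-style comparison, not the testbed's helper.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        # $1 sorts first, so it is the smaller version
        echo -1
    else
        echo 1
    fi
}
```

With this convention, `semver_cmp 9.5.0 7.0.0` prints `1`, matching the `-ge 0` branch taken in the trace.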
2026-03-19 01:34:59.290474 | orchestrator | 2026-03-19 01:34:59.290605 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-19 01:34:59.290627 | orchestrator | 2026-03-19 01:34:59.290643 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 01:34:59.290658 | orchestrator | Thursday 19 March 2026 01:34:50 +0000 (0:00:00.101) 0:00:00.101 ******** 2026-03-19 01:34:59.290667 | orchestrator | ok: [testbed-manager] 2026-03-19 01:34:59.290677 | orchestrator | 2026-03-19 01:34:59.290685 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-19 01:34:59.290694 | orchestrator | Thursday 19 March 2026 01:34:53 +0000 (0:00:03.338) 0:00:03.440 ******** 2026-03-19 01:34:59.290702 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:34:59.290711 | orchestrator | 2026-03-19 01:34:59.290719 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-19 01:34:59.290731 | orchestrator | Thursday 19 March 2026 01:34:53 +0000 (0:00:00.041) 0:00:03.481 ******** 2026-03-19 01:34:59.290745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-19 01:34:59.290759 | orchestrator | 2026-03-19 01:34:59.290771 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-19 01:34:59.290783 | orchestrator | Thursday 19 March 2026 01:34:53 +0000 (0:00:00.081) 0:00:03.563 ******** 2026-03-19 01:34:59.290796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 01:34:59.290808 | orchestrator | 2026-03-19 01:34:59.290820 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-19 01:34:59.290857 | orchestrator | Thursday 19 March 2026 01:34:53 +0000 (0:00:00.064) 0:00:03.628 ******** 2026-03-19 01:34:59.290872 | orchestrator | ok: [testbed-manager] 2026-03-19 01:34:59.290886 | orchestrator | 2026-03-19 01:34:59.290899 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-19 01:34:59.290913 | orchestrator | Thursday 19 March 2026 01:34:54 +0000 (0:00:01.031) 0:00:04.660 ******** 2026-03-19 01:34:59.290927 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:34:59.290939 | orchestrator | 2026-03-19 01:34:59.290953 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-19 01:34:59.290968 | orchestrator | Thursday 19 March 2026 01:34:54 +0000 (0:00:00.065) 0:00:04.726 ******** 2026-03-19 01:34:59.290983 | orchestrator | ok: [testbed-manager] 2026-03-19 01:34:59.290996 | orchestrator | 2026-03-19 01:34:59.291009 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-19 01:34:59.291048 | orchestrator | Thursday 19 March 2026 01:34:55 +0000 (0:00:00.489) 0:00:05.216 ******** 2026-03-19 01:34:59.291061 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:34:59.291075 | orchestrator | 2026-03-19 01:34:59.291087 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-19 01:34:59.291101 | orchestrator | Thursday 19 March 2026 01:34:55 +0000 (0:00:00.088) 0:00:05.304 ******** 2026-03-19 01:34:59.291114 | orchestrator | changed: [testbed-manager] 2026-03-19 01:34:59.291126 | orchestrator | 2026-03-19 01:34:59.291139 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-19 01:34:59.291151 | orchestrator | Thursday 19 March 2026 01:34:55 +0000 (0:00:00.535) 0:00:05.840 ******** 2026-03-19 01:34:59.291164 | orchestrator | changed: 
[testbed-manager] 2026-03-19 01:34:59.291176 | orchestrator | 2026-03-19 01:34:59.291188 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-19 01:34:59.291201 | orchestrator | Thursday 19 March 2026 01:34:56 +0000 (0:00:01.028) 0:00:06.868 ******** 2026-03-19 01:34:59.291213 | orchestrator | ok: [testbed-manager] 2026-03-19 01:34:59.291225 | orchestrator | 2026-03-19 01:34:59.291238 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-19 01:34:59.291250 | orchestrator | Thursday 19 March 2026 01:34:57 +0000 (0:00:00.934) 0:00:07.803 ******** 2026-03-19 01:34:59.291263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-19 01:34:59.291276 | orchestrator | 2026-03-19 01:34:59.291288 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-19 01:34:59.291301 | orchestrator | Thursday 19 March 2026 01:34:57 +0000 (0:00:00.092) 0:00:07.895 ******** 2026-03-19 01:34:59.291339 | orchestrator | changed: [testbed-manager] 2026-03-19 01:34:59.291353 | orchestrator | 2026-03-19 01:34:59.291366 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:34:59.291380 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 01:34:59.291392 | orchestrator | 2026-03-19 01:34:59.291404 | orchestrator | 2026-03-19 01:34:59.291416 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:34:59.291428 | orchestrator | Thursday 19 March 2026 01:34:59 +0000 (0:00:01.166) 0:00:09.062 ******** 2026-03-19 01:34:59.291441 | orchestrator | =============================================================================== 2026-03-19 01:34:59.291453 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.34s 2026-03-19 01:34:59.291465 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-03-19 01:34:59.291477 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.03s 2026-03-19 01:34:59.291489 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2026-03-19 01:34:59.291501 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-03-19 01:34:59.291514 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-03-19 01:34:59.291547 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-03-19 01:34:59.291560 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-19 01:34:59.291573 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-19 01:34:59.291585 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-19 01:34:59.291597 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-19 01:34:59.291610 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-19 01:34:59.291622 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.04s 2026-03-19 01:34:59.550351 | orchestrator | + osism apply sshconfig 2026-03-19 01:35:11.568218 | orchestrator | 2026-03-19 01:35:11 | INFO  | Task 9c028827-d4b9-49da-b01e-1158867d61b4 (sshconfig) was prepared for execution. 
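The sshconfig play that follows writes one config fragment per host under `.ssh/config.d` and then assembles them into a single ssh config, as its task names ("Ensure config for each host exist", "Assemble ssh config") indicate. A minimal shell sketch of that fragment-and-assemble pattern (the paths, the `Host` block contents, and the `dragon` user are illustrative assumptions, not the role's template):

```shell
# Write one ssh config fragment per host into a config.d directory, then
# concatenate the fragments into a single config file one level up,
# similar in spirit to Ansible's assemble module. Hedged sketch only.
write_ssh_fragments() {
    local confdir="$1"; shift
    mkdir -p "$confdir"
    for host in "$@"; do
        {
            printf 'Host %s\n' "$host"
            printf '    User dragon\n'
            printf '    StrictHostKeyChecking ask\n'
            printf '\n'
        } > "$confdir/$host"
    done
    # Glob expansion is sorted, so the assembled file is deterministic.
    cat "$confdir"/* > "${confdir%/*}/config"
}
```

For example, `write_ssh_fragments ~/.ssh/config.d testbed-node-0 testbed-node-1` would leave per-host fragments in `config.d` and an assembled `~/.ssh/config` beside it.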
2026-03-19 01:35:11.569451 | orchestrator | 2026-03-19 01:35:11 | INFO  | It takes a moment until task 9c028827-d4b9-49da-b01e-1158867d61b4 (sshconfig) has been started and output is visible here. 2026-03-19 01:35:22.936216 | orchestrator | 2026-03-19 01:35:22.936354 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-19 01:35:22.936370 | orchestrator | 2026-03-19 01:35:22.936380 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-19 01:35:22.936389 | orchestrator | Thursday 19 March 2026 01:35:15 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-03-19 01:35:22.936398 | orchestrator | ok: [testbed-manager] 2026-03-19 01:35:22.936408 | orchestrator | 2026-03-19 01:35:22.936440 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-19 01:35:22.936449 | orchestrator | Thursday 19 March 2026 01:35:16 +0000 (0:00:00.523) 0:00:00.675 ******** 2026-03-19 01:35:22.936459 | orchestrator | changed: [testbed-manager] 2026-03-19 01:35:22.936468 | orchestrator | 2026-03-19 01:35:22.936477 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-19 01:35:22.936487 | orchestrator | Thursday 19 March 2026 01:35:16 +0000 (0:00:00.497) 0:00:01.172 ******** 2026-03-19 01:35:22.936495 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-19 01:35:22.936505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-19 01:35:22.936514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-19 01:35:22.936523 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-19 01:35:22.936531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-19 01:35:22.936540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-19 01:35:22.936549 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-19 01:35:22.936558 | orchestrator | 2026-03-19 01:35:22.936567 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-19 01:35:22.936576 | orchestrator | Thursday 19 March 2026 01:35:22 +0000 (0:00:05.543) 0:00:06.715 ******** 2026-03-19 01:35:22.936585 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:35:22.936594 | orchestrator | 2026-03-19 01:35:22.936603 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-19 01:35:22.936612 | orchestrator | Thursday 19 March 2026 01:35:22 +0000 (0:00:00.074) 0:00:06.790 ******** 2026-03-19 01:35:22.936621 | orchestrator | changed: [testbed-manager] 2026-03-19 01:35:22.936630 | orchestrator | 2026-03-19 01:35:22.936638 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:35:22.936647 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:35:22.936655 | orchestrator | 2026-03-19 01:35:22.936664 | orchestrator | 2026-03-19 01:35:22.936673 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:35:22.936682 | orchestrator | Thursday 19 March 2026 01:35:22 +0000 (0:00:00.570) 0:00:07.360 ******** 2026-03-19 01:35:22.936691 | orchestrator | =============================================================================== 2026-03-19 01:35:22.936701 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.54s 2026-03-19 01:35:22.936709 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2026-03-19 01:35:22.936718 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s 2026-03-19 01:35:22.936727 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.50s 2026-03-19 01:35:22.936736 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-19 01:35:23.189800 | orchestrator | + osism apply known-hosts 2026-03-19 01:35:35.135726 | orchestrator | 2026-03-19 01:35:35 | INFO  | Task f9811ff0-a6d8-447d-a615-ebf0ef638e5e (known-hosts) was prepared for execution. 2026-03-19 01:35:35.135821 | orchestrator | 2026-03-19 01:35:35 | INFO  | It takes a moment until task f9811ff0-a6d8-447d-a615-ebf0ef638e5e (known-hosts) has been started and output is visible here. 2026-03-19 01:35:51.607809 | orchestrator | 2026-03-19 01:35:51.607929 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-19 01:35:51.607947 | orchestrator | 2026-03-19 01:35:51.607961 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-19 01:35:51.607982 | orchestrator | Thursday 19 March 2026 01:35:39 +0000 (0:00:00.160) 0:00:00.161 ******** 2026-03-19 01:35:51.608001 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-19 01:35:51.608020 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-19 01:35:51.608038 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-19 01:35:51.608056 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-19 01:35:51.608072 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-19 01:35:51.608090 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-19 01:35:51.608105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-19 01:35:51.608121 | orchestrator | 2026-03-19 01:35:51.608138 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-19 01:35:51.608155 | orchestrator | Thursday 19 March 2026 01:35:45 +0000 (0:00:05.880) 0:00:06.041 ******** 2026-03-19 
01:35:51.608174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-19 01:35:51.608194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-19 01:35:51.608212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-19 01:35:51.608231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-19 01:35:51.608248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-19 01:35:51.608265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-19 01:35:51.608365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-19 01:35:51.608389 | orchestrator | 2026-03-19 01:35:51.608406 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.608425 | orchestrator | Thursday 19 March 2026 01:35:45 +0000 (0:00:00.165) 0:00:06.207 ******** 2026-03-19 01:35:51.608456 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDH1EWd15oP16IOdKoNOdoe/Fz2JcR02CkcgmkjjxEhP4VUQfB7UyUNn7B4rdTSKnDkigFcuziA6TfRWRs7Fj7fRLjBPj1BE0RbDDqbHTlu8sZt99bjSXEz1jy1+FMPp2pxXT6arozPQsWHGnAPdleGoEBaY3pzRMxeLn46U6aEw0kULMDcoqZVYi/fLLxrWvVmPTiz2AP1hZVqzV9k0OiskT1KqKHmsGYRJveflrMhrRSXkFeg8nBdmmq23QA5HVrSuXIzuYTu1lMFFNphy7N146cenI+QsYTe8zUySrhbrm1sH91DPeCTQgxecbhlRVO5M4oudQxABk1B4JPJv8ldC6S1cCbe9ejwAn2cOMXF1RivTIk8PK3bZJvHLJ+rWsvXNrqIw0KQnJbq8WlCnR/262RvFhDZWx8k5oARHWUiuWamrKLBCnBPqx4aBArCj8Ve+xkfb9Esfd6o0/MqGj03Dl36eiE9zIA0X7IIWZSSDwY2s/9OToyeMcUagDGEqhE=) 2026-03-19 01:35:51.608516 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXCJDM70rHh90T13MKofGTR2G5mUxjTcrjsACs7s9ATg0Sj4b23H5R1899TUjXF4h8HOLdhvlfbvZ0Y6Dw0cHo=) 2026-03-19 01:35:51.608534 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIPXO48HrvPNJZRpsaHjz+q6kAi0GlT6DBNmzdJ1t0T) 2026-03-19 01:35:51.608546 | orchestrator | 2026-03-19 01:35:51.608557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.608568 | orchestrator | Thursday 19 March 2026 01:35:46 +0000 (0:00:01.126) 0:00:07.333 ******** 2026-03-19 01:35:51.608600 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMeAEFBp1CDphnPk1AcG1WOTfzMWo6nvPfHai5y5Zrxr6D5+2yoxPhJTMkzqG6THYMtcwTHCJJi59lB2lOnqDjC84l9NHG0aaTKxG93E9Fd21Wd1FEEUF6y6C4wGT6DrppbBijRrYs0wxoZpIF2ifRJeEBB7mL2zIMoMQuLWO2aTjMreCDnnzzBduJ4q9f1vOKDPkHWSRysdp/4f0GDOzMA0GiPcyLfYjEpwgN/1hi8LyeVvgYSfAQNufyKIv+H/CvWaCQWFCdHpApfhP2byOklqpT0KGXORflWwbZm7ncOwX6fRsI1X8p+JK8gVvfqxDwrsC+HgMy5cBCdCGCcXlB1TPbeU92VB/JnhJnxuJ10APMKcodAR8EVSXuEw12t08SRMwAoaTYO62IOtjWwgt6iy+vYBPCpFJSjRVDME8yETq7DUS3W9sMU4fmMIat5x6YCHzdHJnLfG0ukEPq+t3H1U2sDTR0om8WJ0cpyTA6LZv3ZCng+h/L54LEbPjQvoc=) 2026-03-19 01:35:51.608613 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAXj6aJR3w9jgoh914daYFtefQ23mBy/Laa3qmDkGTDU) 2026-03-19 01:35:51.608624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGXQZ6CzQ8B9ank5Vz1WYrwrbJdGr7E7BVEuPdT1AHlb5LSu07rcSPLQSfv8/4JV/ybEpZtJSh1OpWiZIsr7AxA=) 2026-03-19 01:35:51.608635 | orchestrator | 2026-03-19 01:35:51.608646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.608657 | orchestrator | Thursday 19 March 2026 01:35:47 +0000 (0:00:01.058) 0:00:08.392 ******** 2026-03-19 01:35:51.608668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5yXQmcUr77qKIgumH+jBceu0FdyjPQxXOFZDGA4BZ/micv9e+SwgnC5u4QYpQPCWc4dvIl00+CjdOJgqk6Kg9++foKIq44IQI6HwF4xYrN6vemjKX/C+JXkdPaV+17OPMUPg6ceGlYUoqGFrSHNTZffEb9CQ4kFWTSC0Nx7PdCo1XAmdEk2xZol3uuHP5ntA5ZfzYeDnmD+yPEmHFBLAdvZje9vYP3mtMHoZiWIDaLY1KLlujwQjbxdZMsHcn04nWq4GKLaX5CjaG8id2H3pA1DM33040xItWdsBMbgr5NhWDuSu19/2I6891pgeStOL1kP7DfdmiJZLaJ2N8d5IC3/45lK2L9XV3gIHTQeMa8ckl4KL9T8Z/Jfxrs4ad53nZY9s1nXcTa0xykIz1+heo7kjAHOtEgzLgp6Vl8oKiq4IurdLCsh53+M86tCDKrMwkxUAzYLhxQg8KQPzEo+p+hM9IfSPHeaHMbV9vvRY89Gg6sRPEU9EOYdqMqf/AjuE=) 2026-03-19 01:35:51.608680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqs1PKNUzTqX9wuum2/KHl4d97RaGOfKHgwBmeg/67R8x49uC7XG3xqWoTa23Xt8aajZv6XVVlLas5PNCZ8nwM=) 2026-03-19 01:35:51.608691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHETR6TOCy7rUt0l3sA54oMgeazXkesLQna68C7bCWy) 2026-03-19 01:35:51.608703 | orchestrator | 2026-03-19 01:35:51.608718 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.608736 | orchestrator | Thursday 19 March 2026 01:35:48 +0000 (0:00:01.047) 
0:00:09.439 ******** 2026-03-19 01:35:51.608754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC627W9Phbyzxku5NoouZeQU43Bmz8nN5ru6iotyp/wNpyc8h1ouzdawXXd4HeEOIPe2SjC1yGD+56cnfLhNQLhn4OyHIHsHjA04STvrTxscwaaCcGcs93Jiru3eLjX7mPQnxXmFrIjzSLmyWx5SzKaNXzI9h1WNQy/anT+D6t/hDbLpu+d/LPVOyOlgGN2hbHzuu9XcYCnMfNX4YcxeoYXuMiMimYJo0eRKWFqE/2Q6O5p2Je2HYqVzbZKlXoQbQhOalSx9popFxwZvpVUJv5weB7KADoxkt0BY7mcmU3cA2Q4pbrndGDg4RtpyNrGagkYFHOt7M7NT1D4PYH/0qskW9Q0uakt8bz9VnmarXBTPTXv7ZsUY2PqSQBzgO94/aS6PXDpkX4XynvEexz6ucqGFqSMdXakB2UOJjqmj4d9DnwqdhMRwdSzo3H09tdYvkn7w+0gts4viOoi3Rs+d2klOph7LOFhwc5OqCQ0M/6LNGjAg9Ro4++ZackVzf4wPMk=) 2026-03-19 01:35:51.608772 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIx4i4F9F/LhhfGArrscBCwxOOlLUHj3H36lYkfwU/eJMb3zlN8j11KkNlMqybV3wjr80Esa/BOHb6pHrGuGp9c=) 2026-03-19 01:35:51.608803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINcUFmKazKJhYASYMBbORsQlOwnFed5UZD86C520XstQ) 2026-03-19 01:35:51.608821 | orchestrator | 2026-03-19 01:35:51.608838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.608857 | orchestrator | Thursday 19 March 2026 01:35:49 +0000 (0:00:01.051) 0:00:10.491 ******** 2026-03-19 01:35:51.608876 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMiJ8YKHCZDMPsCoMfFCBupHDA+5Gk1IAI/H7qAHCib/) 2026-03-19 01:35:51.609072 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCSsWW9xG3+QNLma8NVLsx5L9v+7FihUIF+dhWMOScnblk0w4PnQbxb7E7tl8yI3tuzi/QHCxUetrsAlNrqr48S726xmnfKc/xvG5qOvdgOIIecaA14Fdn1a1xJ0D2tqeH9ObGwHBE6W0pfTBi4ZrH5ObakuXc0HfjFFq5SffAquo9Tp09e4QSnErOmfVWd5ZfQDVBueo42LQwS28w4e0RQMdmPqc8ZqiGHN+c/jOdyMBDaYtnSaek/yzqEdUiLRwCirEb217vuCKwiEFWJMRuCBzFzO4lTpD8HiE/MqvHofRwDfUIoFCZmjtm1qnIKXISfYtCzYZh9qpbMLgKI3l2BsUorQ5R+vRNUscn17+vwgADvjLfzYSvazwdQTaL7TzYFlm0wxaLfma4dnF8tfNriRz74luM21ME21amPYWb5MzzUGcQDjKAG7ST7z0FYYUod3bQmYcQYPD7C6BhdaC4auEDny2huDdEEDLQCuVoSWihAes/cEKxTfcsBbglg60k=) 2026-03-19 01:35:51.609102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPWR8UxhiEcod22/jx7cVxNl0ckhbREh+sS5KB7wnMb0Ii/mjMJHoRpPUfCaaFJsWdmmeCM3U5jC+Rr9ssmiWqA=) 2026-03-19 01:35:51.609120 | orchestrator | 2026-03-19 01:35:51.609138 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:35:51.609157 | orchestrator | Thursday 19 March 2026 01:35:50 +0000 (0:00:01.069) 0:00:11.561 ******** 2026-03-19 01:35:51.609190 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImS2KB71dQ4BULG8azaPzTE27c8ZKCMObbIliWKcIwRuS28Uc/69ccPIGFgtF0ZT+r8ec4brW2scsmJqWFUor0=) 2026-03-19 01:36:02.173475 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIzI8nqx4aMCCDDCUwwLw6CZFffADOIe9HD/86qlipp7/tX5fUTq9UqZAmJEilXYAOzAxy+cfzK8+4JxXXbYA936jlBg237rlYdSsiMK+ypKieesRkl+9e1hxTigNMrsryWFcOGTLWqJvyzjMZKgVpPoFag4KMJc2un+0qQWZ5GlCZQeUBx1z6O18j4v/d8CPFkfXdNGnNpSdl9V8sjJAL0xtL9hoItnMOnCcMLIzbsoIedJmB22Cyt0cP6YmRQH5XhuwsMvY3hEbLXdlLUpZw0qGN3/sO9IotMTXkwaV7GqQMAfT04P6pIHxoG+W1Xj3iDMRBaJ15FOLHVcWMvb89DVpvufva9eZ2J/g7ESFE6zMKmNxK9xEZsrpsllk0cWYjHpfxnXH6G5Qy0R+oIqW7zlEEs7X13tEiuFVUEBi2JTTBCFLgedjGJdVQ9hnmmB7ZV9UWxfqpYUxNKfv6UYQJ3q+AgJZ8K6JHYh36iNuALWN38Fc90bwUAQmoEKPFZV8=) 
2026-03-19 01:36:02.173608 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEMMlXaQPD7o6C93vVT1vYxFqrzByCFeRI5Nzl1gWVjs) 2026-03-19 01:36:02.173625 | orchestrator | 2026-03-19 01:36:02.173636 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:02.173647 | orchestrator | Thursday 19 March 2026 01:35:51 +0000 (0:00:01.001) 0:00:12.562 ******** 2026-03-19 01:36:02.173657 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1gD+MzumhMxDVcPrWoVmu65gMeSBHoFykHPHztRZIVhzmQZiwzgyxnyHGRfWqk4Bm50lNNKmDK4D5YAXO+ktl5Gd9QoNcELWxQv0v+2sssWqGtpWMQp+tYX8KTnPbubHwe7mFZc74tH4efwqjfKWJqfYDjaz9Tx1tioncHhudS8QDsmXsAlpVNeInrxE6+j2TGTQjwHagn56c2oWrSOCN7ciUCI8fSbXo5ETKUr9vuwWMgcRHLJg2ICIT85wAmRBBR5j7qD+5QIKanRPwDuoogV/Z9Sb008hlKFV17XHrRmVXZnP39lORPqHtXFyLGZQNDDva/6ncpb0HIi6jdX5oS/cr2HNhBYx2ho8KGrWBjRRFemi3+g0K6cjZh2Zs6ZbLvwftqykI21ohfPEqIoUmLHqJXWWxBnV8L0SLhZb6PAeLqkUHK+WzCsVMN3/gw9iBfxxUgnLe/IO57s2A6oE1O+qfOBInviD3auTQT00/86izLyhUnITc0kWlW4cjq7U=) 2026-03-19 01:36:02.173667 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzhzAckNEztgkRc9HRx7Wv7sUQ3MrP0bvG3wPMz26lb93R8PfBuMo8/w4hpFXcABv4/BTZBqvsC/zoSu3qUQpU=) 2026-03-19 01:36:02.173699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMssGCdHdmEvJVBoQ74wHVN6E1suuQNMEm5sc4ii6EVm) 2026-03-19 01:36:02.173708 | orchestrator | 2026-03-19 01:36:02.173717 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-19 01:36:02.173727 | orchestrator | Thursday 19 March 2026 01:35:52 +0000 (0:00:01.029) 0:00:13.591 ******** 2026-03-19 01:36:02.173737 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-19 01:36:02.173746 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-03-19 01:36:02.173755 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-19 01:36:02.173763 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-19 01:36:02.173772 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-19 01:36:02.173781 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-19 01:36:02.173789 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-19 01:36:02.173798 | orchestrator | 2026-03-19 01:36:02.173807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-19 01:36:02.173816 | orchestrator | Thursday 19 March 2026 01:35:57 +0000 (0:00:05.204) 0:00:18.795 ******** 2026-03-19 01:36:02.173826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-19 01:36:02.173837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-19 01:36:02.173846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-19 01:36:02.173854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-19 01:36:02.173863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-19 01:36:02.173872 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-19 01:36:02.173881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-19 01:36:02.173889 | orchestrator | 2026-03-19 01:36:02.173913 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:02.173922 | orchestrator | Thursday 19 March 2026 01:35:57 +0000 (0:00:00.168) 0:00:18.963 ******** 2026-03-19 01:36:02.173931 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXCJDM70rHh90T13MKofGTR2G5mUxjTcrjsACs7s9ATg0Sj4b23H5R1899TUjXF4h8HOLdhvlfbvZ0Y6Dw0cHo=) 2026-03-19 01:36:02.173948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH1EWd15oP16IOdKoNOdoe/Fz2JcR02CkcgmkjjxEhP4VUQfB7UyUNn7B4rdTSKnDkigFcuziA6TfRWRs7Fj7fRLjBPj1BE0RbDDqbHTlu8sZt99bjSXEz1jy1+FMPp2pxXT6arozPQsWHGnAPdleGoEBaY3pzRMxeLn46U6aEw0kULMDcoqZVYi/fLLxrWvVmPTiz2AP1hZVqzV9k0OiskT1KqKHmsGYRJveflrMhrRSXkFeg8nBdmmq23QA5HVrSuXIzuYTu1lMFFNphy7N146cenI+QsYTe8zUySrhbrm1sH91DPeCTQgxecbhlRVO5M4oudQxABk1B4JPJv8ldC6S1cCbe9ejwAn2cOMXF1RivTIk8PK3bZJvHLJ+rWsvXNrqIw0KQnJbq8WlCnR/262RvFhDZWx8k5oARHWUiuWamrKLBCnBPqx4aBArCj8Ve+xkfb9Esfd6o0/MqGj03Dl36eiE9zIA0X7IIWZSSDwY2s/9OToyeMcUagDGEqhE=) 2026-03-19 01:36:02.173964 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIPXO48HrvPNJZRpsaHjz+q6kAi0GlT6DBNmzdJ1t0T) 2026-03-19 01:36:02.173995 | orchestrator | 2026-03-19 01:36:02.174091 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:02.174110 | orchestrator | Thursday 19 March 2026 
01:35:59 +0000 (0:00:01.038) 0:00:20.002 ******** 2026-03-19 01:36:02.174126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMeAEFBp1CDphnPk1AcG1WOTfzMWo6nvPfHai5y5Zrxr6D5+2yoxPhJTMkzqG6THYMtcwTHCJJi59lB2lOnqDjC84l9NHG0aaTKxG93E9Fd21Wd1FEEUF6y6C4wGT6DrppbBijRrYs0wxoZpIF2ifRJeEBB7mL2zIMoMQuLWO2aTjMreCDnnzzBduJ4q9f1vOKDPkHWSRysdp/4f0GDOzMA0GiPcyLfYjEpwgN/1hi8LyeVvgYSfAQNufyKIv+H/CvWaCQWFCdHpApfhP2byOklqpT0KGXORflWwbZm7ncOwX6fRsI1X8p+JK8gVvfqxDwrsC+HgMy5cBCdCGCcXlB1TPbeU92VB/JnhJnxuJ10APMKcodAR8EVSXuEw12t08SRMwAoaTYO62IOtjWwgt6iy+vYBPCpFJSjRVDME8yETq7DUS3W9sMU4fmMIat5x6YCHzdHJnLfG0ukEPq+t3H1U2sDTR0om8WJ0cpyTA6LZv3ZCng+h/L54LEbPjQvoc=) 2026-03-19 01:36:02.174144 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGXQZ6CzQ8B9ank5Vz1WYrwrbJdGr7E7BVEuPdT1AHlb5LSu07rcSPLQSfv8/4JV/ybEpZtJSh1OpWiZIsr7AxA=) 2026-03-19 01:36:02.174162 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAXj6aJR3w9jgoh914daYFtefQ23mBy/Laa3qmDkGTDU) 2026-03-19 01:36:02.174177 | orchestrator | 2026-03-19 01:36:02.174192 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:02.174205 | orchestrator | Thursday 19 March 2026 01:36:00 +0000 (0:00:01.045) 0:00:21.048 ******** 2026-03-19 01:36:02.174218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5yXQmcUr77qKIgumH+jBceu0FdyjPQxXOFZDGA4BZ/micv9e+SwgnC5u4QYpQPCWc4dvIl00+CjdOJgqk6Kg9++foKIq44IQI6HwF4xYrN6vemjKX/C+JXkdPaV+17OPMUPg6ceGlYUoqGFrSHNTZffEb9CQ4kFWTSC0Nx7PdCo1XAmdEk2xZol3uuHP5ntA5ZfzYeDnmD+yPEmHFBLAdvZje9vYP3mtMHoZiWIDaLY1KLlujwQjbxdZMsHcn04nWq4GKLaX5CjaG8id2H3pA1DM33040xItWdsBMbgr5NhWDuSu19/2I6891pgeStOL1kP7DfdmiJZLaJ2N8d5IC3/45lK2L9XV3gIHTQeMa8ckl4KL9T8Z/Jfxrs4ad53nZY9s1nXcTa0xykIz1+heo7kjAHOtEgzLgp6Vl8oKiq4IurdLCsh53+M86tCDKrMwkxUAzYLhxQg8KQPzEo+p+hM9IfSPHeaHMbV9vvRY89Gg6sRPEU9EOYdqMqf/AjuE=) 2026-03-19 01:36:02.174286 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqs1PKNUzTqX9wuum2/KHl4d97RaGOfKHgwBmeg/67R8x49uC7XG3xqWoTa23Xt8aajZv6XVVlLas5PNCZ8nwM=) 2026-03-19 01:36:02.174327 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHETR6TOCy7rUt0l3sA54oMgeazXkesLQna68C7bCWy) 2026-03-19 01:36:02.174342 | orchestrator | 2026-03-19 01:36:02.174356 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:02.174370 | orchestrator | Thursday 19 March 2026 01:36:01 +0000 (0:00:01.056) 0:00:22.104 ******** 2026-03-19 01:36:02.174385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINcUFmKazKJhYASYMBbORsQlOwnFed5UZD86C520XstQ) 2026-03-19 01:36:02.174424 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC627W9Phbyzxku5NoouZeQU43Bmz8nN5ru6iotyp/wNpyc8h1ouzdawXXd4HeEOIPe2SjC1yGD+56cnfLhNQLhn4OyHIHsHjA04STvrTxscwaaCcGcs93Jiru3eLjX7mPQnxXmFrIjzSLmyWx5SzKaNXzI9h1WNQy/anT+D6t/hDbLpu+d/LPVOyOlgGN2hbHzuu9XcYCnMfNX4YcxeoYXuMiMimYJo0eRKWFqE/2Q6O5p2Je2HYqVzbZKlXoQbQhOalSx9popFxwZvpVUJv5weB7KADoxkt0BY7mcmU3cA2Q4pbrndGDg4RtpyNrGagkYFHOt7M7NT1D4PYH/0qskW9Q0uakt8bz9VnmarXBTPTXv7ZsUY2PqSQBzgO94/aS6PXDpkX4XynvEexz6ucqGFqSMdXakB2UOJjqmj4d9DnwqdhMRwdSzo3H09tdYvkn7w+0gts4viOoi3Rs+d2klOph7LOFhwc5OqCQ0M/6LNGjAg9Ro4++ZackVzf4wPMk=) 2026-03-19 01:36:06.444533 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIx4i4F9F/LhhfGArrscBCwxOOlLUHj3H36lYkfwU/eJMb3zlN8j11KkNlMqybV3wjr80Esa/BOHb6pHrGuGp9c=) 2026-03-19 01:36:06.444649 | orchestrator | 2026-03-19 01:36:06.444697 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:06.444711 | orchestrator | Thursday 19 March 2026 01:36:02 +0000 (0:00:01.026) 0:00:23.131 ******** 2026-03-19 01:36:06.444723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPWR8UxhiEcod22/jx7cVxNl0ckhbREh+sS5KB7wnMb0Ii/mjMJHoRpPUfCaaFJsWdmmeCM3U5jC+Rr9ssmiWqA=) 2026-03-19 01:36:06.444737 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSsWW9xG3+QNLma8NVLsx5L9v+7FihUIF+dhWMOScnblk0w4PnQbxb7E7tl8yI3tuzi/QHCxUetrsAlNrqr48S726xmnfKc/xvG5qOvdgOIIecaA14Fdn1a1xJ0D2tqeH9ObGwHBE6W0pfTBi4ZrH5ObakuXc0HfjFFq5SffAquo9Tp09e4QSnErOmfVWd5ZfQDVBueo42LQwS28w4e0RQMdmPqc8ZqiGHN+c/jOdyMBDaYtnSaek/yzqEdUiLRwCirEb217vuCKwiEFWJMRuCBzFzO4lTpD8HiE/MqvHofRwDfUIoFCZmjtm1qnIKXISfYtCzYZh9qpbMLgKI3l2BsUorQ5R+vRNUscn17+vwgADvjLfzYSvazwdQTaL7TzYFlm0wxaLfma4dnF8tfNriRz74luM21ME21amPYWb5MzzUGcQDjKAG7ST7z0FYYUod3bQmYcQYPD7C6BhdaC4auEDny2huDdEEDLQCuVoSWihAes/cEKxTfcsBbglg60k=) 
2026-03-19 01:36:06.444751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMiJ8YKHCZDMPsCoMfFCBupHDA+5Gk1IAI/H7qAHCib/) 2026-03-19 01:36:06.444764 | orchestrator | 2026-03-19 01:36:06.444775 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:06.444786 | orchestrator | Thursday 19 March 2026 01:36:03 +0000 (0:00:01.045) 0:00:24.176 ******** 2026-03-19 01:36:06.444798 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIzI8nqx4aMCCDDCUwwLw6CZFffADOIe9HD/86qlipp7/tX5fUTq9UqZAmJEilXYAOzAxy+cfzK8+4JxXXbYA936jlBg237rlYdSsiMK+ypKieesRkl+9e1hxTigNMrsryWFcOGTLWqJvyzjMZKgVpPoFag4KMJc2un+0qQWZ5GlCZQeUBx1z6O18j4v/d8CPFkfXdNGnNpSdl9V8sjJAL0xtL9hoItnMOnCcMLIzbsoIedJmB22Cyt0cP6YmRQH5XhuwsMvY3hEbLXdlLUpZw0qGN3/sO9IotMTXkwaV7GqQMAfT04P6pIHxoG+W1Xj3iDMRBaJ15FOLHVcWMvb89DVpvufva9eZ2J/g7ESFE6zMKmNxK9xEZsrpsllk0cWYjHpfxnXH6G5Qy0R+oIqW7zlEEs7X13tEiuFVUEBi2JTTBCFLgedjGJdVQ9hnmmB7ZV9UWxfqpYUxNKfv6UYQJ3q+AgJZ8K6JHYh36iNuALWN38Fc90bwUAQmoEKPFZV8=) 2026-03-19 01:36:06.444810 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImS2KB71dQ4BULG8azaPzTE27c8ZKCMObbIliWKcIwRuS28Uc/69ccPIGFgtF0ZT+r8ec4brW2scsmJqWFUor0=) 2026-03-19 01:36:06.444821 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEMMlXaQPD7o6C93vVT1vYxFqrzByCFeRI5Nzl1gWVjs) 2026-03-19 01:36:06.444832 | orchestrator | 2026-03-19 01:36:06.444843 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-19 01:36:06.444854 | orchestrator | Thursday 19 March 2026 01:36:04 +0000 (0:00:01.032) 0:00:25.209 ******** 2026-03-19 01:36:06.444865 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIMssGCdHdmEvJVBoQ74wHVN6E1suuQNMEm5sc4ii6EVm) 2026-03-19 01:36:06.444896 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1gD+MzumhMxDVcPrWoVmu65gMeSBHoFykHPHztRZIVhzmQZiwzgyxnyHGRfWqk4Bm50lNNKmDK4D5YAXO+ktl5Gd9QoNcELWxQv0v+2sssWqGtpWMQp+tYX8KTnPbubHwe7mFZc74tH4efwqjfKWJqfYDjaz9Tx1tioncHhudS8QDsmXsAlpVNeInrxE6+j2TGTQjwHagn56c2oWrSOCN7ciUCI8fSbXo5ETKUr9vuwWMgcRHLJg2ICIT85wAmRBBR5j7qD+5QIKanRPwDuoogV/Z9Sb008hlKFV17XHrRmVXZnP39lORPqHtXFyLGZQNDDva/6ncpb0HIi6jdX5oS/cr2HNhBYx2ho8KGrWBjRRFemi3+g0K6cjZh2Zs6ZbLvwftqykI21ohfPEqIoUmLHqJXWWxBnV8L0SLhZb6PAeLqkUHK+WzCsVMN3/gw9iBfxxUgnLe/IO57s2A6oE1O+qfOBInviD3auTQT00/86izLyhUnITc0kWlW4cjq7U=) 2026-03-19 01:36:06.444908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzhzAckNEztgkRc9HRx7Wv7sUQ3MrP0bvG3wPMz26lb93R8PfBuMo8/w4hpFXcABv4/BTZBqvsC/zoSu3qUQpU=) 2026-03-19 01:36:06.444919 | orchestrator | 2026-03-19 01:36:06.444931 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-19 01:36:06.444942 | orchestrator | Thursday 19 March 2026 01:36:05 +0000 (0:00:01.025) 0:00:26.235 ******** 2026-03-19 01:36:06.444961 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-19 01:36:06.444973 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-19 01:36:06.444983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-19 01:36:06.445007 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-19 01:36:06.445037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-19 01:36:06.445048 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-19 01:36:06.445061 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-19 01:36:06.445074 | orchestrator | 
skipping: [testbed-manager] 2026-03-19 01:36:06.445087 | orchestrator | 2026-03-19 01:36:06.445100 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-19 01:36:06.445113 | orchestrator | Thursday 19 March 2026 01:36:05 +0000 (0:00:00.177) 0:00:26.413 ******** 2026-03-19 01:36:06.445126 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:36:06.445138 | orchestrator | 2026-03-19 01:36:06.445151 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-19 01:36:06.445169 | orchestrator | Thursday 19 March 2026 01:36:05 +0000 (0:00:00.060) 0:00:26.473 ******** 2026-03-19 01:36:06.445183 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:36:06.445196 | orchestrator | 2026-03-19 01:36:06.445208 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-19 01:36:06.445221 | orchestrator | Thursday 19 March 2026 01:36:05 +0000 (0:00:00.048) 0:00:26.522 ******** 2026-03-19 01:36:06.445234 | orchestrator | changed: [testbed-manager] 2026-03-19 01:36:06.445246 | orchestrator | 2026-03-19 01:36:06.445260 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:36:06.445273 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 01:36:06.445287 | orchestrator | 2026-03-19 01:36:06.445300 | orchestrator | 2026-03-19 01:36:06.445366 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:36:06.445379 | orchestrator | Thursday 19 March 2026 01:36:06 +0000 (0:00:00.689) 0:00:27.212 ******** 2026-03-19 01:36:06.445392 | orchestrator | =============================================================================== 2026-03-19 01:36:06.445404 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.88s 2026-03-19 
01:36:06.445417 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s 2026-03-19 01:36:06.445429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-19 01:36:06.445439 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-19 01:36:06.445450 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 01:36:06.445461 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-19 01:36:06.445471 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-19 01:36:06.445482 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-19 01:36:06.445493 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-19 01:36:06.445504 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-19 01:36:06.445514 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-19 01:36:06.445525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 01:36:06.445536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 01:36:06.445546 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 01:36:06.445557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-19 01:36:06.445575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-19 01:36:06.445586 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2026-03-19 
01:36:06.445597 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-03-19 01:36:06.445607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-19 01:36:06.445619 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-19 01:36:06.702792 | orchestrator | + osism apply squid 2026-03-19 01:36:18.782977 | orchestrator | 2026-03-19 01:36:18 | INFO  | Task 73346d15-c439-4655-ae15-845d71c381ca (squid) was prepared for execution. 2026-03-19 01:36:18.783112 | orchestrator | 2026-03-19 01:36:18 | INFO  | It takes a moment until task 73346d15-c439-4655-ae15-845d71c381ca (squid) has been started and output is visible here. 2026-03-19 01:38:19.376695 | orchestrator | 2026-03-19 01:38:19.376784 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-19 01:38:19.376792 | orchestrator | 2026-03-19 01:38:19.376796 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-19 01:38:19.376801 | orchestrator | Thursday 19 March 2026 01:36:22 +0000 (0:00:00.160) 0:00:00.160 ******** 2026-03-19 01:38:19.376805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 01:38:19.376811 | orchestrator | 2026-03-19 01:38:19.376815 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-19 01:38:19.376819 | orchestrator | Thursday 19 March 2026 01:36:22 +0000 (0:00:00.080) 0:00:00.240 ******** 2026-03-19 01:38:19.376822 | orchestrator | ok: [testbed-manager] 2026-03-19 01:38:19.376827 | orchestrator | 2026-03-19 01:38:19.376831 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-19 
01:38:19.376835 | orchestrator | Thursday 19 March 2026 01:36:24 +0000 (0:00:01.354) 0:00:01.594 ******** 2026-03-19 01:38:19.376840 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-19 01:38:19.376844 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-19 01:38:19.376848 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-19 01:38:19.376851 | orchestrator | 2026-03-19 01:38:19.376855 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-19 01:38:19.376859 | orchestrator | Thursday 19 March 2026 01:36:25 +0000 (0:00:01.116) 0:00:02.711 ******** 2026-03-19 01:38:19.376863 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-19 01:38:19.376867 | orchestrator | 2026-03-19 01:38:19.376871 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-19 01:38:19.376875 | orchestrator | Thursday 19 March 2026 01:36:26 +0000 (0:00:01.044) 0:00:03.755 ******** 2026-03-19 01:38:19.376879 | orchestrator | ok: [testbed-manager] 2026-03-19 01:38:19.376882 | orchestrator | 2026-03-19 01:38:19.376886 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-19 01:38:19.376890 | orchestrator | Thursday 19 March 2026 01:36:26 +0000 (0:00:00.341) 0:00:04.097 ******** 2026-03-19 01:38:19.376894 | orchestrator | changed: [testbed-manager] 2026-03-19 01:38:19.376898 | orchestrator | 2026-03-19 01:38:19.376902 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-19 01:38:19.376906 | orchestrator | Thursday 19 March 2026 01:36:27 +0000 (0:00:00.840) 0:00:04.938 ******** 2026-03-19 01:38:19.376910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-19 01:38:19.376915 | orchestrator | ok: [testbed-manager]
2026-03-19 01:38:19.376921 | orchestrator |
2026-03-19 01:38:19.376925 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-19 01:38:19.376929 | orchestrator | Thursday 19 March 2026 01:37:02 +0000 (0:00:35.055) 0:00:39.993 ********
2026-03-19 01:38:19.376952 | orchestrator | changed: [testbed-manager]
2026-03-19 01:38:19.376956 | orchestrator |
2026-03-19 01:38:19.376959 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-19 01:38:19.376963 | orchestrator | Thursday 19 March 2026 01:37:18 +0000 (0:00:15.729) 0:00:55.723 ********
2026-03-19 01:38:19.376967 | orchestrator | Pausing for 60 seconds
2026-03-19 01:38:19.376971 | orchestrator | changed: [testbed-manager]
2026-03-19 01:38:19.376975 | orchestrator |
2026-03-19 01:38:19.376979 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-19 01:38:19.376983 | orchestrator | Thursday 19 March 2026 01:38:18 +0000 (0:01:00.090) 0:01:55.813 ********
2026-03-19 01:38:19.376986 | orchestrator | ok: [testbed-manager]
2026-03-19 01:38:19.376990 | orchestrator |
2026-03-19 01:38:19.376994 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-19 01:38:19.376997 | orchestrator | Thursday 19 March 2026 01:38:18 +0000 (0:00:00.067) 0:01:55.880 ********
2026-03-19 01:38:19.377001 | orchestrator | changed: [testbed-manager]
2026-03-19 01:38:19.377005 | orchestrator |
2026-03-19 01:38:19.377008 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:38:19.377012 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:38:19.377016 | orchestrator |
2026-03-19 01:38:19.377020 | orchestrator |
2026-03-19 01:38:19.377024 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:38:19.377027 | orchestrator | Thursday 19 March 2026 01:38:19 +0000 (0:00:00.626) 0:01:56.507 ********
2026-03-19 01:38:19.377031 | orchestrator | ===============================================================================
2026-03-19 01:38:19.377035 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-19 01:38:19.377038 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.06s
2026-03-19 01:38:19.377058 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.73s
2026-03-19 01:38:19.377062 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.35s
2026-03-19 01:38:19.377066 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s
2026-03-19 01:38:19.377069 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s
2026-03-19 01:38:19.377073 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.84s
2026-03-19 01:38:19.377077 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2026-03-19 01:38:19.377080 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2026-03-19 01:38:19.377084 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-03-19 01:38:19.377088 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-03-19 01:38:19.623282 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-19 01:38:19.623431 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-19 01:38:19.670522 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 01:38:19.670610 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-19 01:38:19.674846 | orchestrator | + set -e
2026-03-19 01:38:19.674923 | orchestrator | + NAMESPACE=kolla/release
2026-03-19 01:38:19.674942 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-19 01:38:19.680790 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-19 01:38:19.743156 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-19 01:38:19.743546 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-19 01:38:31.683291 | orchestrator | 2026-03-19 01:38:31 | INFO  | Task 6adcea62-691f-4f95-9a5d-2aec166bfe1f (operator) was prepared for execution.
2026-03-19 01:38:31.683462 | orchestrator | 2026-03-19 01:38:31 | INFO  | It takes a moment until task 6adcea62-691f-4f95-9a5d-2aec166bfe1f (operator) has been started and output is visible here.
2026-03-19 01:38:48.652333 | orchestrator |
2026-03-19 01:38:48.652536 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-19 01:38:48.652577 | orchestrator |
2026-03-19 01:38:48.652588 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 01:38:48.652597 | orchestrator | Thursday 19 March 2026 01:38:35 +0000 (0:00:00.135) 0:00:00.135 ********
2026-03-19 01:38:48.652607 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:38:48.652617 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:38:48.652625 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:38:48.652634 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:38:48.652657 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:38:48.652675 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:38:48.652684 | orchestrator |
2026-03-19 01:38:48.652693 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-19 01:38:48.652701 | orchestrator | Thursday 19 March 2026 01:38:39 +0000 (0:00:03.377) 0:00:03.513 ********
2026-03-19 01:38:48.652710 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:38:48.652719 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:38:48.652727 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:38:48.652751 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:38:48.652760 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:38:48.652768 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:38:48.652777 | orchestrator |
2026-03-19 01:38:48.652785 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-19 01:38:48.652794 | orchestrator |
2026-03-19 01:38:48.652803 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-19 01:38:48.652811 | orchestrator | Thursday 19 March 2026 01:38:39 +0000 (0:00:00.759) 0:00:04.273 ********
2026-03-19 01:38:48.652820 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:38:48.652829 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:38:48.652837 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:38:48.652846 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:38:48.652854 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:38:48.652863 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:38:48.652873 | orchestrator |
2026-03-19 01:38:48.652883 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-19 01:38:48.652893 | orchestrator | Thursday 19 March 2026 01:38:40 +0000 (0:00:00.179) 0:00:04.452 ********
2026-03-19 01:38:48.652904 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:38:48.652914 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:38:48.652923 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:38:48.652934 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:38:48.652944 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:38:48.652953 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:38:48.652961 | orchestrator |
2026-03-19 01:38:48.652970 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-19 01:38:48.652979 | orchestrator | Thursday 19 March 2026 01:38:40 +0000 (0:00:00.158) 0:00:04.610 ********
2026-03-19 01:38:48.652987 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:48.652998 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:48.653006 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:48.653015 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:48.653024 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:48.653032 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:48.653041 | orchestrator |
2026-03-19 01:38:48.653049 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-19 01:38:48.653058 | orchestrator | Thursday 19 March 2026 01:38:40 +0000 (0:00:00.633) 0:00:05.244 ********
2026-03-19 01:38:48.653067 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:48.653075 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:48.653084 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:48.653092 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:48.653101 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:48.653109 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:48.653118 | orchestrator |
2026-03-19 01:38:48.653126 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-19 01:38:48.653142 | orchestrator | Thursday 19 March 2026 01:38:41 +0000 (0:00:00.843) 0:00:06.088 ********
2026-03-19 01:38:48.653151 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-19 01:38:48.653160 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-19 01:38:48.653169 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-19 01:38:48.653177 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-19 01:38:48.653186 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-19 01:38:48.653194 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-19 01:38:48.653203 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-19 01:38:48.653211 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-19 01:38:48.653220 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-19 01:38:48.653228 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-19 01:38:48.653237 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-19 01:38:48.653245 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-19 01:38:48.653254 | orchestrator |
2026-03-19 01:38:48.653263 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-19 01:38:48.653271 | orchestrator | Thursday 19 March 2026 01:38:43 +0000 (0:00:02.139) 0:00:08.227 ********
2026-03-19 01:38:48.653280 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:48.653288 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:48.653297 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:48.653305 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:48.653314 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:48.653322 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:48.653331 | orchestrator |
2026-03-19 01:38:48.653340 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-19 01:38:48.653350 | orchestrator | Thursday 19 March 2026 01:38:45 +0000 (0:00:01.271) 0:00:09.499 ********
2026-03-19 01:38:48.653358 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-19 01:38:48.653367 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-19 01:38:48.653402 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-19 01:38:48.653412 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653436 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653446 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653454 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653463 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653472 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-19 01:38:48.653480 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653489 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653498 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653506 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653515 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653524 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-19 01:38:48.653533 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653541 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653550 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653560 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653568 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653577 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-19 01:38:48.653592 | orchestrator |
2026-03-19 01:38:48.653601 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-19 01:38:48.653610 | orchestrator | Thursday 19 March 2026 01:38:46 +0000 (0:00:01.393) 0:00:10.893 ********
2026-03-19 01:38:48.653619 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:48.653628 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:48.653636 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:48.653645 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:48.653653 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:48.653662 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:48.653670 | orchestrator |
2026-03-19 01:38:48.653679 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-19 01:38:48.653688 | orchestrator | Thursday 19 March 2026 01:38:46 +0000 (0:00:00.155) 0:00:11.049 ********
2026-03-19 01:38:48.653696 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:48.653706 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:48.653720 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:48.653735 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:48.653748 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:48.653761 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:48.653775 | orchestrator |
2026-03-19 01:38:48.653790 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-19 01:38:48.653804 | orchestrator | Thursday 19 March 2026 01:38:46 +0000 (0:00:00.179) 0:00:11.228 ********
2026-03-19 01:38:48.653819 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:48.653833 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:48.653848 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:48.653862 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:48.653878 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:48.653887 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:48.653896 | orchestrator |
2026-03-19 01:38:48.653905 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-19 01:38:48.653913 | orchestrator | Thursday 19 March 2026 01:38:47 +0000 (0:00:00.611) 0:00:11.840 ********
2026-03-19 01:38:48.653922 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:48.653930 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:48.653939 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:48.653947 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:48.653956 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:48.653964 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:48.653973 | orchestrator |
2026-03-19 01:38:48.653981 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-19 01:38:48.653999 | orchestrator | Thursday 19 March 2026 01:38:47 +0000 (0:00:00.154) 0:00:11.994 ********
2026-03-19 01:38:48.654008 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-19 01:38:48.654074 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:48.654086 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-19 01:38:48.654095 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:48.654103 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-19 01:38:48.654112 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:48.654120 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-19 01:38:48.654129 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:48.654137 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 01:38:48.654146 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:48.654154 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-19 01:38:48.654163 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:48.654171 | orchestrator |
2026-03-19 01:38:48.654180 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-19 01:38:48.654189 | orchestrator | Thursday 19 March 2026 01:38:48 +0000 (0:00:00.743) 0:00:12.738 ********
2026-03-19 01:38:48.654227 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:48.654244 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:48.654253 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:48.654262 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:48.654270 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:48.654279 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:48.654287 | orchestrator |
2026-03-19 01:38:48.654296 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-19 01:38:48.654305 | orchestrator | Thursday 19 March 2026 01:38:48 +0000 (0:00:00.147) 0:00:12.907 ********
2026-03-19 01:38:48.654314 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:48.654322 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:48.654331 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:48.654339 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:48.654358 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:49.976820 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:49.976931 | orchestrator |
2026-03-19 01:38:49.976949 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-19 01:38:49.976963 | orchestrator | Thursday 19 March 2026 01:38:48 +0000 (0:00:00.147) 0:00:13.055 ********
2026-03-19 01:38:49.976974 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:49.976985 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:49.976996 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:49.977007 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:49.977018 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:49.977046 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:49.977067 | orchestrator |
2026-03-19 01:38:49.977078 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-19 01:38:49.977089 | orchestrator | Thursday 19 March 2026 01:38:48 +0000 (0:00:00.146) 0:00:13.202 ********
2026-03-19 01:38:49.977100 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:38:49.977111 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:38:49.977144 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:38:49.977156 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:38:49.977167 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:38:49.977177 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:38:49.977188 | orchestrator |
2026-03-19 01:38:49.977199 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-19 01:38:49.977210 | orchestrator | Thursday 19 March 2026 01:38:49 +0000 (0:00:00.697) 0:00:13.899 ********
2026-03-19 01:38:49.977221 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:38:49.977232 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:38:49.977242 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:38:49.977254 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:38:49.977266 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:38:49.977276 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:38:49.977287 | orchestrator |
2026-03-19 01:38:49.977298 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:38:49.977310 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977323 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977334 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977344 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977356 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977370 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 01:38:49.977443 | orchestrator |
2026-03-19 01:38:49.977457 | orchestrator |
2026-03-19 01:38:49.977469 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:38:49.977483 | orchestrator | Thursday 19 March 2026 01:38:49 +0000 (0:00:00.231) 0:00:14.131 ********
2026-03-19 01:38:49.977495 | orchestrator | ===============================================================================
2026-03-19 01:38:49.977507 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s
2026-03-19 01:38:49.977519 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 2.14s
2026-03-19 01:38:49.977532 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2026-03-19 01:38:49.977545 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2026-03-19 01:38:49.977557 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-03-19 01:38:49.977570 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2026-03-19 01:38:49.977582 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2026-03-19 01:38:49.977595 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-03-19 01:38:49.977607 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-03-19 01:38:49.977619 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-03-19 01:38:49.977631 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-03-19 01:38:49.977643 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-03-19 01:38:49.977656 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-03-19 01:38:49.977668 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-03-19 01:38:49.977680 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-03-19 01:38:49.977693 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-03-19 01:38:49.977706 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-03-19 01:38:49.977718 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-03-19 01:38:49.977729 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-03-19 01:38:50.236863 | orchestrator | + osism apply --environment custom facts
2026-03-19 01:38:52.117509 | orchestrator | 2026-03-19 01:38:52 | INFO  | Trying to run play facts in environment custom
2026-03-19 01:39:02.299776 | orchestrator | 2026-03-19 01:39:02 | INFO  | Task 938febc7-266b-476b-9e39-8acbc219d916 (facts) was prepared for execution.
2026-03-19 01:39:02.299963 | orchestrator | 2026-03-19 01:39:02 | INFO  | It takes a moment until task 938febc7-266b-476b-9e39-8acbc219d916 (facts) has been started and output is visible here.
2026-03-19 01:39:46.300179 | orchestrator |
2026-03-19 01:39:46.300371 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-19 01:39:46.300475 | orchestrator |
2026-03-19 01:39:46.300493 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-19 01:39:46.300513 | orchestrator | Thursday 19 March 2026 01:39:06 +0000 (0:00:00.061) 0:00:00.061 ********
2026-03-19 01:39:46.300532 | orchestrator | ok: [testbed-manager]
2026-03-19 01:39:46.300553 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:39:46.300574 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:39:46.300593 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.300611 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.300622 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:39:46.300633 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.300645 | orchestrator |
2026-03-19 01:39:46.300656 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-19 01:39:46.300714 | orchestrator | Thursday 19 March 2026 01:39:07 +0000 (0:00:01.423) 0:00:01.484 ********
2026-03-19 01:39:46.300736 | orchestrator | ok: [testbed-manager]
2026-03-19 01:39:46.300757 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:39:46.300778 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.300798 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:39:46.300817 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.300830 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:39:46.300842 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.300855 | orchestrator |
2026-03-19 01:39:46.300868 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-19 01:39:46.300881 | orchestrator |
2026-03-19 01:39:46.300894 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-19 01:39:46.300906 | orchestrator | Thursday 19 March 2026 01:39:08 +0000 (0:00:01.150) 0:00:02.634 ********
2026-03-19 01:39:46.300919 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.300931 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.300944 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.300956 | orchestrator |
2026-03-19 01:39:46.300970 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-19 01:39:46.300984 | orchestrator | Thursday 19 March 2026 01:39:08 +0000 (0:00:00.072) 0:00:02.706 ********
2026-03-19 01:39:46.300996 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.301009 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.301021 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.301033 | orchestrator |
2026-03-19 01:39:46.301045 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-19 01:39:46.301056 | orchestrator | Thursday 19 March 2026 01:39:08 +0000 (0:00:00.183) 0:00:02.880 ********
2026-03-19 01:39:46.301067 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.301094 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.301115 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.301126 | orchestrator |
2026-03-19 01:39:46.301136 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-19 01:39:46.301148 | orchestrator | Thursday 19 March 2026 01:39:09 +0000 (0:00:00.122) 0:00:03.063 ********
2026-03-19 01:39:46.301167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 01:39:46.301187 | orchestrator |
2026-03-19 01:39:46.301207 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-19 01:39:46.301227 | orchestrator | Thursday 19 March 2026 01:39:09 +0000 (0:00:00.122) 0:00:03.186 ********
2026-03-19 01:39:46.301246 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.301265 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.301285 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.301305 | orchestrator |
2026-03-19 01:39:46.301325 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-19 01:39:46.301344 | orchestrator | Thursday 19 March 2026 01:39:09 +0000 (0:00:00.422) 0:00:03.608 ********
2026-03-19 01:39:46.301362 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:39:46.301413 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:39:46.301431 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:39:46.301448 | orchestrator |
2026-03-19 01:39:46.301465 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-19 01:39:46.301485 | orchestrator | Thursday 19 March 2026 01:39:09 +0000 (0:00:00.098) 0:00:03.707 ********
2026-03-19 01:39:46.301504 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.301523 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.301542 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.301559 | orchestrator |
2026-03-19 01:39:46.301575 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-19 01:39:46.301586 | orchestrator | Thursday 19 March 2026 01:39:10 +0000 (0:00:01.053) 0:00:04.760 ********
2026-03-19 01:39:46.301610 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.301622 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.301632 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.301643 | orchestrator |
2026-03-19 01:39:46.301654 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-19 01:39:46.301664 | orchestrator | Thursday 19 March 2026 01:39:11 +0000 (0:00:00.464) 0:00:05.225 ********
2026-03-19 01:39:46.301675 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.301686 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.301696 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.301707 | orchestrator |
2026-03-19 01:39:46.301771 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-19 01:39:46.301783 | orchestrator | Thursday 19 March 2026 01:39:12 +0000 (0:00:01.061) 0:00:06.286 ********
2026-03-19 01:39:46.301794 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.301805 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.301816 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.301827 | orchestrator |
2026-03-19 01:39:46.301843 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-19 01:39:46.301861 | orchestrator | Thursday 19 March 2026 01:39:29 +0000 (0:00:16.712) 0:00:22.999 ********
2026-03-19 01:39:46.301879 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:39:46.301899 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:39:46.301918 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:39:46.301938 | orchestrator |
2026-03-19 01:39:46.301957 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-19 01:39:46.302001 | orchestrator | Thursday 19 March 2026 01:39:29 +0000 (0:00:00.078) 0:00:23.077 ********
2026-03-19 01:39:46.302076 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:39:46.302089 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:39:46.302100 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:39:46.302111 | orchestrator |
2026-03-19 01:39:46.302129 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-19 01:39:46.302141 | orchestrator | Thursday 19 March 2026 01:39:37 +0000 (0:00:08.058) 0:00:31.135 ********
2026-03-19 01:39:46.302152 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.302171 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.302189 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.302207 | orchestrator |
2026-03-19 01:39:46.302224 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-19 01:39:46.302244 | orchestrator | Thursday 19 March 2026 01:39:37 +0000 (0:00:00.458) 0:00:31.593 ********
2026-03-19 01:39:46.302263 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-19 01:39:46.302282 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-19 01:39:46.302295 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-19 01:39:46.302306 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-19 01:39:46.302317 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-19 01:39:46.302328 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-19 01:39:46.302338 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-19 01:39:46.302348 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-19 01:39:46.302359 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-19 01:39:46.302370 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-19 01:39:46.302411 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-19 01:39:46.302423 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-19 01:39:46.302433 | orchestrator |
2026-03-19 01:39:46.302444 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-19 01:39:46.302454 | orchestrator | Thursday 19 March 2026 01:39:41 +0000 (0:00:03.504) 0:00:35.097 ********
2026-03-19 01:39:46.302475 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.302486 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.302497 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.302508 | orchestrator |
2026-03-19 01:39:46.302519 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-19 01:39:46.302529 | orchestrator |
2026-03-19 01:39:46.302540 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-19 01:39:46.302551 | orchestrator | Thursday 19 March 2026 01:39:42 +0000 (0:00:01.367) 0:00:36.465 ********
2026-03-19 01:39:46.302562 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:39:46.302575 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:39:46.302593 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:39:46.302612 | orchestrator | ok: [testbed-manager]
2026-03-19 01:39:46.302630 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:39:46.302647 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:39:46.302662 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:39:46.302676 | orchestrator |
2026-03-19 01:39:46.302691 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:39:46.302709 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:39:46.302726 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:39:46.302744 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:39:46.302764 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:39:46.302782 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:39:46.302802 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:39:46.302821 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:39:46.302840 | orchestrator |
2026-03-19 01:39:46.302852 | orchestrator |
2026-03-19 01:39:46.302863 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:39:46.302874 | orchestrator | Thursday 19 March 2026 01:39:46 +0000 (0:00:03.808) 0:00:40.273 ********
2026-03-19 01:39:46.302884 | orchestrator | ===============================================================================
2026-03-19 01:39:46.302895 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.71s
2026-03-19 01:39:46.302905 | orchestrator | Install required packages (Debian) -------------------------------------- 8.06s
2026-03-19 01:39:46.302916 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.81s
2026-03-19 01:39:46.302926 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2026-03-19 01:39:46.302937 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-03-19 01:39:46.302947 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-03-19 01:39:46.302970 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2026-03-19 01:39:46.510120 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-03-19 01:39:46.510208 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2026-03-19 01:39:46.510241 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.46s 2026-03-19 01:39:46.510246 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-03-19 01:39:46.510250 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-03-19 01:39:46.510274 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2026-03-19 01:39:46.510278 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s 2026-03-19 01:39:46.510283 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-03-19 01:39:46.510290 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2026-03-19 01:39:46.510296 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-03-19 01:39:46.510303 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s 2026-03-19 01:39:46.775082 | orchestrator | + osism apply bootstrap 2026-03-19 01:39:58.788957 | orchestrator | 2026-03-19 01:39:58 | INFO  | Task 0366ac63-91e9-4a79-b882-8245f8c9a795 (bootstrap) was prepared for execution. 2026-03-19 01:39:58.789081 | orchestrator | 2026-03-19 01:39:58 | INFO  | It takes a moment until task 0366ac63-91e9-4a79-b882-8245f8c9a795 (bootstrap) has been started and output is visible here. 
2026-03-19 01:40:14.836347 | orchestrator | 2026-03-19 01:40:14.836519 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-19 01:40:14.836535 | orchestrator | 2026-03-19 01:40:14.836545 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-19 01:40:14.836555 | orchestrator | Thursday 19 March 2026 01:40:02 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-03-19 01:40:14.836565 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:14.836576 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:14.836586 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:14.836595 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:14.836604 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:14.836613 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:14.836622 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:14.836631 | orchestrator | 2026-03-19 01:40:14.836641 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 01:40:14.836651 | orchestrator | 2026-03-19 01:40:14.836660 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-19 01:40:14.836670 | orchestrator | Thursday 19 March 2026 01:40:03 +0000 (0:00:00.241) 0:00:00.393 ******** 2026-03-19 01:40:14.836679 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:14.836688 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:14.836697 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:14.836707 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:14.836716 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:14.836725 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:14.836734 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:14.836743 | orchestrator | 2026-03-19 01:40:14.836752 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-03-19 01:40:14.836762 | orchestrator | 2026-03-19 01:40:14.836771 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-19 01:40:14.836780 | orchestrator | Thursday 19 March 2026 01:40:06 +0000 (0:00:03.773) 0:00:04.167 ******** 2026-03-19 01:40:14.836791 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-19 01:40:14.836800 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-19 01:40:14.836810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-19 01:40:14.836819 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-19 01:40:14.836828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 01:40:14.836837 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-19 01:40:14.836847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-19 01:40:14.836856 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-19 01:40:14.836865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 01:40:14.836876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 01:40:14.836908 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-19 01:40:14.836920 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 01:40:14.836930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 01:40:14.836941 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-19 01:40:14.836952 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 01:40:14.836963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-19 01:40:14.836974 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:14.836985 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-19 01:40:14.836995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 01:40:14.837015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 01:40:14.837026 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 01:40:14.837037 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 01:40:14.837047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 01:40:14.837057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 01:40:14.837067 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:14.837078 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:14.837088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-19 01:40:14.837099 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 01:40:14.837109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-19 01:40:14.837119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 01:40:14.837130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-19 01:40:14.837140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-19 01:40:14.837150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 01:40:14.837160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-19 01:40:14.837170 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 01:40:14.837180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 01:40:14.837190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-19 01:40:14.837200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 01:40:14.837210 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-4)  2026-03-19 01:40:14.837220 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 01:40:14.837229 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:14.837238 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 01:40:14.837247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 01:40:14.837255 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 01:40:14.837264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 01:40:14.837273 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:14.837282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 01:40:14.837308 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 01:40:14.837317 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 01:40:14.837326 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 01:40:14.837335 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 01:40:14.837344 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:14.837352 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 01:40:14.837401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 01:40:14.837418 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 01:40:14.837428 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:14.837443 | orchestrator | 2026-03-19 01:40:14.837452 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-19 01:40:14.837461 | orchestrator | 2026-03-19 01:40:14.837470 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-19 01:40:14.837479 | orchestrator | Thursday 19 March 2026 01:40:07 +0000 
(0:00:00.456) 0:00:04.624 ******** 2026-03-19 01:40:14.837488 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:14.837496 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:14.837505 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:14.837514 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:14.837522 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:14.837531 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:14.837539 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:14.837548 | orchestrator | 2026-03-19 01:40:14.837557 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-19 01:40:14.837565 | orchestrator | Thursday 19 March 2026 01:40:08 +0000 (0:00:01.341) 0:00:05.965 ******** 2026-03-19 01:40:14.837574 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:14.837583 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:14.837591 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:14.837600 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:14.837608 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:14.837616 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:14.837625 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:14.837633 | orchestrator | 2026-03-19 01:40:14.837642 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-19 01:40:14.837651 | orchestrator | Thursday 19 March 2026 01:40:09 +0000 (0:00:01.127) 0:00:07.092 ******** 2026-03-19 01:40:14.837661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:14.837671 | orchestrator | 2026-03-19 01:40:14.837680 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-19 01:40:14.837689 | 
orchestrator | Thursday 19 March 2026 01:40:10 +0000 (0:00:00.281) 0:00:07.374 ******** 2026-03-19 01:40:14.837698 | orchestrator | changed: [testbed-manager] 2026-03-19 01:40:14.837706 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:14.837715 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:14.837724 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:14.837732 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:14.837741 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:14.837749 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:14.837758 | orchestrator | 2026-03-19 01:40:14.837767 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-19 01:40:14.837775 | orchestrator | Thursday 19 March 2026 01:40:12 +0000 (0:00:02.150) 0:00:09.524 ******** 2026-03-19 01:40:14.837784 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:14.837794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:14.837804 | orchestrator | 2026-03-19 01:40:14.837812 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-19 01:40:14.837821 | orchestrator | Thursday 19 March 2026 01:40:12 +0000 (0:00:00.260) 0:00:09.785 ******** 2026-03-19 01:40:14.837830 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:14.837839 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:14.837847 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:14.837856 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:14.837864 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:14.837873 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:14.837882 | orchestrator | 2026-03-19 01:40:14.837895 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-19 01:40:14.837909 | orchestrator | Thursday 19 March 2026 01:40:13 +0000 (0:00:01.055) 0:00:10.841 ******** 2026-03-19 01:40:14.837917 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:14.837926 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:14.837934 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:14.837943 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:14.837951 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:14.837960 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:14.837968 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:14.837977 | orchestrator | 2026-03-19 01:40:14.837985 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-19 01:40:14.837994 | orchestrator | Thursday 19 March 2026 01:40:14 +0000 (0:00:00.608) 0:00:11.449 ******** 2026-03-19 01:40:14.838002 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:14.838011 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:14.838100 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:14.838111 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:14.838119 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:14.838128 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:14.838136 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:14.838145 | orchestrator | 2026-03-19 01:40:14.838154 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-19 01:40:14.838163 | orchestrator | Thursday 19 March 2026 01:40:14 +0000 (0:00:00.420) 0:00:11.870 ******** 2026-03-19 01:40:14.838195 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:14.838204 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:14.838220 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:28.165058 | 
orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:28.165206 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:28.165232 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:28.165251 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:28.165269 | orchestrator | 2026-03-19 01:40:28.165290 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-19 01:40:28.165311 | orchestrator | Thursday 19 March 2026 01:40:14 +0000 (0:00:00.217) 0:00:12.087 ******** 2026-03-19 01:40:28.165334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:28.165406 | orchestrator | 2026-03-19 01:40:28.165426 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-19 01:40:28.165447 | orchestrator | Thursday 19 March 2026 01:40:15 +0000 (0:00:00.278) 0:00:12.366 ******** 2026-03-19 01:40:28.165466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:28.165486 | orchestrator | 2026-03-19 01:40:28.165506 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-19 01:40:28.165525 | orchestrator | Thursday 19 March 2026 01:40:15 +0000 (0:00:00.299) 0:00:12.665 ******** 2026-03-19 01:40:28.165545 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.165565 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.165585 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.165607 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.165627 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 01:40:28.165650 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.165671 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.165691 | orchestrator | 2026-03-19 01:40:28.165718 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-19 01:40:28.165742 | orchestrator | Thursday 19 March 2026 01:40:17 +0000 (0:00:01.705) 0:00:14.371 ******** 2026-03-19 01:40:28.165762 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:28.165826 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:28.165849 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:28.165873 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:28.165895 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:28.165915 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:28.165935 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:28.165953 | orchestrator | 2026-03-19 01:40:28.165971 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-19 01:40:28.165990 | orchestrator | Thursday 19 March 2026 01:40:17 +0000 (0:00:00.216) 0:00:14.587 ******** 2026-03-19 01:40:28.166009 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.166103 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.166124 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.166144 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.166162 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.166180 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.166198 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.166217 | orchestrator | 2026-03-19 01:40:28.166235 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-19 01:40:28.166253 | orchestrator | Thursday 19 March 2026 01:40:18 +0000 (0:00:00.613) 0:00:15.201 ******** 2026-03-19 01:40:28.166273 | 
orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:28.166293 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:28.166312 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:28.166331 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:28.166350 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:28.166368 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:28.166428 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:28.166448 | orchestrator | 2026-03-19 01:40:28.166467 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-19 01:40:28.166487 | orchestrator | Thursday 19 March 2026 01:40:18 +0000 (0:00:00.353) 0:00:15.554 ******** 2026-03-19 01:40:28.166505 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.166523 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:28.166542 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:28.166561 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:28.166580 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:28.166616 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:28.166635 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:28.166654 | orchestrator | 2026-03-19 01:40:28.166673 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-19 01:40:28.166693 | orchestrator | Thursday 19 March 2026 01:40:18 +0000 (0:00:00.571) 0:00:16.125 ******** 2026-03-19 01:40:28.166713 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.166732 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:28.166752 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:28.166772 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:28.166792 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:28.166809 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:28.166829 | 
orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:28.166848 | orchestrator | 2026-03-19 01:40:28.166867 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-19 01:40:28.166886 | orchestrator | Thursday 19 March 2026 01:40:20 +0000 (0:00:01.213) 0:00:17.339 ******** 2026-03-19 01:40:28.166904 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.166922 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.166940 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.166960 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.166979 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.166999 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.167017 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.167036 | orchestrator | 2026-03-19 01:40:28.167056 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-19 01:40:28.167076 | orchestrator | Thursday 19 March 2026 01:40:21 +0000 (0:00:01.082) 0:00:18.421 ******** 2026-03-19 01:40:28.167145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:28.167168 | orchestrator | 2026-03-19 01:40:28.167187 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-19 01:40:28.167207 | orchestrator | Thursday 19 March 2026 01:40:21 +0000 (0:00:00.272) 0:00:18.694 ******** 2026-03-19 01:40:28.167225 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:28.167245 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:28.167266 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:40:28.167284 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:40:28.167302 | orchestrator | changed: [testbed-node-1] 
2026-03-19 01:40:28.167319 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:40:28.167339 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:28.167359 | orchestrator | 2026-03-19 01:40:28.167446 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-19 01:40:28.167466 | orchestrator | Thursday 19 March 2026 01:40:22 +0000 (0:00:01.342) 0:00:20.036 ******** 2026-03-19 01:40:28.167483 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.167501 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.167520 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.167539 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.167558 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.167577 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.167595 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.167614 | orchestrator | 2026-03-19 01:40:28.167634 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-19 01:40:28.167654 | orchestrator | Thursday 19 March 2026 01:40:23 +0000 (0:00:00.197) 0:00:20.234 ******** 2026-03-19 01:40:28.167674 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.167694 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.167713 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.167732 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.167751 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.167770 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.167788 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.167805 | orchestrator | 2026-03-19 01:40:28.167825 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-19 01:40:28.167844 | orchestrator | Thursday 19 March 2026 01:40:23 +0000 (0:00:00.230) 0:00:20.464 ******** 2026-03-19 01:40:28.167864 | orchestrator | ok: [testbed-manager] 2026-03-19 
01:40:28.167883 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.167903 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.167921 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.167938 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.167955 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.167970 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.167987 | orchestrator | 2026-03-19 01:40:28.168005 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-19 01:40:28.168022 | orchestrator | Thursday 19 March 2026 01:40:23 +0000 (0:00:00.198) 0:00:20.662 ******** 2026-03-19 01:40:28.168039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:40:28.168059 | orchestrator | 2026-03-19 01:40:28.168076 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-19 01:40:28.168093 | orchestrator | Thursday 19 March 2026 01:40:23 +0000 (0:00:00.299) 0:00:20.962 ******** 2026-03-19 01:40:28.168110 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.168128 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.168144 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.168160 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.168189 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.168207 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.168224 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.168240 | orchestrator | 2026-03-19 01:40:28.168258 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-19 01:40:28.168275 | orchestrator | Thursday 19 March 2026 01:40:24 +0000 (0:00:00.581) 0:00:21.544 ******** 2026-03-19 01:40:28.168292 | 
orchestrator | skipping: [testbed-manager] 2026-03-19 01:40:28.168308 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:40:28.168326 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:40:28.168343 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:40:28.168359 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:40:28.168400 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:40:28.168419 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:40:28.168436 | orchestrator | 2026-03-19 01:40:28.168454 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-19 01:40:28.168472 | orchestrator | Thursday 19 March 2026 01:40:24 +0000 (0:00:00.227) 0:00:21.771 ******** 2026-03-19 01:40:28.168489 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.168506 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.168522 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.168538 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.168555 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:40:28.168572 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:40:28.168589 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:40:28.168606 | orchestrator | 2026-03-19 01:40:28.168622 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-19 01:40:28.168638 | orchestrator | Thursday 19 March 2026 01:40:25 +0000 (0:00:01.149) 0:00:22.920 ******** 2026-03-19 01:40:28.168654 | orchestrator | ok: [testbed-manager] 2026-03-19 01:40:28.168672 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:40:28.168689 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:40:28.168705 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:40:28.168720 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:40:28.168737 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:40:28.168755 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:40:28.168772 | 
orchestrator |
2026-03-19 01:40:28.168790 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-19 01:40:28.168821 | orchestrator | Thursday 19 March 2026 01:40:26 +0000 (0:00:00.591) 0:00:23.512 ********
2026-03-19 01:40:28.168840 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:40:28.168858 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:40:28.168873 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:40:28.168889 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:40:28.168921 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.156880 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.157008 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157025 | orchestrator |
2026-03-19 01:41:10.157037 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-19 01:41:10.157049 | orchestrator | Thursday 19 March 2026 01:40:28 +0000 (0:00:01.808) 0:00:25.321 ********
2026-03-19 01:41:10.157059 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157069 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157078 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157088 | orchestrator | changed: [testbed-manager]
2026-03-19 01:41:10.157099 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.157110 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:41:10.157120 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.157130 | orchestrator |
2026-03-19 01:41:10.157140 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-19 01:41:10.157150 | orchestrator | Thursday 19 March 2026 01:40:45 +0000 (0:00:17.396) 0:00:42.717 ********
2026-03-19 01:41:10.157161 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157171 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157181 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157218 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157248 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.157257 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.157276 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.157287 | orchestrator |
2026-03-19 01:41:10.157297 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-19 01:41:10.157306 | orchestrator | Thursday 19 March 2026 01:40:45 +0000 (0:00:00.194) 0:00:42.911 ********
2026-03-19 01:41:10.157316 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157326 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157336 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157346 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157355 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.157365 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.157375 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.157401 | orchestrator |
2026-03-19 01:41:10.157411 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-19 01:41:10.157422 | orchestrator | Thursday 19 March 2026 01:40:45 +0000 (0:00:00.181) 0:00:43.093 ********
2026-03-19 01:41:10.157433 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157443 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157457 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157470 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157483 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.157496 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.157509 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.157523 | orchestrator |
2026-03-19 01:41:10.157537 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-19 01:41:10.157546 | orchestrator | Thursday 19 March 2026 01:40:46 +0000 (0:00:00.245) 0:00:43.278 ********
2026-03-19 01:41:10.157561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:41:10.157577 | orchestrator |
2026-03-19 01:41:10.157590 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-19 01:41:10.157604 | orchestrator | Thursday 19 March 2026 01:40:46 +0000 (0:00:00.245) 0:00:43.524 ********
2026-03-19 01:41:10.157615 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157626 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157637 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.157648 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157658 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157669 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.157679 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.157690 | orchestrator |
2026-03-19 01:41:10.157699 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-19 01:41:10.157709 | orchestrator | Thursday 19 March 2026 01:40:48 +0000 (0:00:01.812) 0:00:45.336 ********
2026-03-19 01:41:10.157719 | orchestrator | changed: [testbed-manager]
2026-03-19 01:41:10.157730 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:41:10.157740 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:41:10.157750 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:41:10.157761 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:41:10.157771 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.157781 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.157791 | orchestrator |
2026-03-19 01:41:10.157800 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-19 01:41:10.157826 | orchestrator | Thursday 19 March 2026 01:40:49 +0000 (0:00:01.139) 0:00:46.476 ********
2026-03-19 01:41:10.157836 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.157845 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.157853 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.157864 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.157874 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.157894 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.157903 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.157912 | orchestrator |
2026-03-19 01:41:10.157923 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-19 01:41:10.157934 | orchestrator | Thursday 19 March 2026 01:40:50 +0000 (0:00:00.814) 0:00:47.290 ********
2026-03-19 01:41:10.157947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:41:10.157959 | orchestrator |
2026-03-19 01:41:10.157969 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-19 01:41:10.157980 | orchestrator | Thursday 19 March 2026 01:40:50 +0000 (0:00:00.254) 0:00:47.545 ********
2026-03-19 01:41:10.157989 | orchestrator | changed: [testbed-manager]
2026-03-19 01:41:10.157999 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:41:10.158009 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:41:10.158071 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:41:10.158082 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:41:10.158093 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.158103 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.158113 | orchestrator |
2026-03-19 01:41:10.158148 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-19 01:41:10.158161 | orchestrator | Thursday 19 March 2026 01:40:51 +0000 (0:00:00.997) 0:00:48.543 ********
2026-03-19 01:41:10.158171 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:41:10.158181 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:41:10.158190 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:41:10.158200 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:41:10.158211 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:41:10.158221 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:41:10.158230 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:41:10.158241 | orchestrator |
2026-03-19 01:41:10.158251 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-19 01:41:10.158261 | orchestrator | Thursday 19 March 2026 01:40:51 +0000 (0:00:00.219) 0:00:48.762 ********
2026-03-19 01:41:10.158272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:41:10.158284 | orchestrator |
2026-03-19 01:41:10.158294 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-19 01:41:10.158305 | orchestrator | Thursday 19 March 2026 01:40:51 +0000 (0:00:00.311) 0:00:49.073 ********
2026-03-19 01:41:10.158316 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.158327 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.158337 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.158348 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.158358 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.158365 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.158371 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.158377 | orchestrator |
2026-03-19 01:41:10.158430 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-19 01:41:10.158437 | orchestrator | Thursday 19 March 2026 01:40:54 +0000 (0:00:02.176) 0:00:51.249 ********
2026-03-19 01:41:10.158444 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:41:10.158450 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:41:10.158456 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:41:10.158462 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.158468 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:41:10.158474 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.158480 | orchestrator | changed: [testbed-manager]
2026-03-19 01:41:10.158486 | orchestrator |
2026-03-19 01:41:10.158493 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-19 01:41:10.158509 | orchestrator | Thursday 19 March 2026 01:40:55 +0000 (0:00:01.917) 0:00:53.167 ********
2026-03-19 01:41:10.158515 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:41:10.158521 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:41:10.158527 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:41:10.158534 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:41:10.158540 | orchestrator | changed: [testbed-manager]
2026-03-19 01:41:10.158546 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:41:10.158552 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:41:10.158558 | orchestrator |
2026-03-19 01:41:10.158564 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-19 01:41:10.158570 | orchestrator | Thursday 19 March 2026 01:41:07 +0000 (0:00:11.497) 0:01:04.664 ********
2026-03-19 01:41:10.158576 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.158583 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.158590 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.158597 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.158604 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.158611 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.158618 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.158625 | orchestrator |
2026-03-19 01:41:10.158632 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-19 01:41:10.158640 | orchestrator | Thursday 19 March 2026 01:41:08 +0000 (0:00:01.153) 0:01:05.818 ********
2026-03-19 01:41:10.158647 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.158654 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.158661 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.158668 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.158675 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.158682 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.158689 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.158696 | orchestrator |
2026-03-19 01:41:10.158703 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-19 01:41:10.158717 | orchestrator | Thursday 19 March 2026 01:41:09 +0000 (0:00:00.906) 0:01:06.724 ********
2026-03-19 01:41:10.158724 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.158731 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.158738 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.158745 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.158752 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.158759 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.158766 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.158773 | orchestrator |
2026-03-19 01:41:10.158780 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-19 01:41:10.158788 | orchestrator | Thursday 19 March 2026 01:41:09 +0000 (0:00:00.174) 0:01:06.899 ********
2026-03-19 01:41:10.158796 | orchestrator | ok: [testbed-manager]
2026-03-19 01:41:10.158803 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:41:10.158809 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:41:10.158816 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:41:10.158823 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:41:10.158830 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:41:10.158836 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:41:10.158844 | orchestrator |
2026-03-19 01:41:10.158851 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-19 01:41:10.158858 | orchestrator | Thursday 19 March 2026 01:41:09 +0000 (0:00:00.246) 0:01:07.073 ********
2026-03-19 01:41:10.158866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:41:10.158875 | orchestrator |
2026-03-19 01:41:10.158890 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-19 01:44:25.202456 | orchestrator | Thursday 19 March 2026 01:41:10 +0000 (0:00:00.246) 0:01:07.320 ********
2026-03-19 01:44:25.202681 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.202700 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.202714 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.202728 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.202742 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.202756 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.202770 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.202783 | orchestrator |
2026-03-19 01:44:25.202798 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-19 01:44:25.202813 | orchestrator | Thursday 19 March 2026 01:41:11 +0000 (0:00:01.810) 0:01:09.131 ********
2026-03-19 01:44:25.202826 | orchestrator | changed: [testbed-manager]
2026-03-19 01:44:25.202842 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:44:25.202856 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:44:25.202870 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:44:25.202884 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:44:25.202898 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:44:25.202911 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:44:25.202925 | orchestrator |
2026-03-19 01:44:25.202939 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-19 01:44:25.202954 | orchestrator | Thursday 19 March 2026 01:41:12 +0000 (0:00:00.562) 0:01:09.693 ********
2026-03-19 01:44:25.202968 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.202983 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.202997 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203012 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203027 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203040 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.203054 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.203067 | orchestrator |
2026-03-19 01:44:25.203082 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-19 01:44:25.203098 | orchestrator | Thursday 19 March 2026 01:41:12 +0000 (0:00:00.194) 0:01:09.888 ********
2026-03-19 01:44:25.203113 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.203127 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.203141 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203155 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.203169 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203183 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203197 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.203211 | orchestrator |
2026-03-19 01:44:25.203225 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-19 01:44:25.203240 | orchestrator | Thursday 19 March 2026 01:41:14 +0000 (0:00:01.497) 0:01:11.385 ********
2026-03-19 01:44:25.203252 | orchestrator | changed: [testbed-manager]
2026-03-19 01:44:25.203265 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:44:25.203278 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:44:25.203289 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:44:25.203301 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:44:25.203314 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:44:25.203326 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:44:25.203338 | orchestrator |
2026-03-19 01:44:25.203351 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-19 01:44:25.203381 | orchestrator | Thursday 19 March 2026 01:41:16 +0000 (0:00:02.205) 0:01:13.591 ********
2026-03-19 01:44:25.203395 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.203407 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.203420 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.203431 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203443 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203455 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203467 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.203501 | orchestrator |
2026-03-19 01:44:25.203514 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-19 01:44:25.203526 | orchestrator | Thursday 19 March 2026 01:41:19 +0000 (0:00:02.953) 0:01:16.545 ********
2026-03-19 01:44:25.203552 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.203565 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.203577 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.203589 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203601 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.203613 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203625 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203638 | orchestrator |
2026-03-19 01:44:25.203650 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-19 01:44:25.203662 | orchestrator | Thursday 19 March 2026 01:42:48 +0000 (0:01:29.577) 0:02:46.123 ********
2026-03-19 01:44:25.203676 | orchestrator | changed: [testbed-manager]
2026-03-19 01:44:25.203689 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:44:25.203702 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:44:25.203716 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:44:25.203728 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:44:25.203740 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:44:25.203752 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:44:25.203766 | orchestrator |
2026-03-19 01:44:25.203780 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-19 01:44:25.203792 | orchestrator | Thursday 19 March 2026 01:44:10 +0000 (0:00:01.945) 0:04:07.290 ********
2026-03-19 01:44:25.203805 | orchestrator | ok: [testbed-manager]
2026-03-19 01:44:25.203818 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203831 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203845 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.203857 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.203871 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203884 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.203898 | orchestrator |
2026-03-19 01:44:25.203912 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-19 01:44:25.203927 | orchestrator | Thursday 19 March 2026 01:44:12 +0000 (0:00:01.945) 0:04:09.235 ********
2026-03-19 01:44:25.203940 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:44:25.203952 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:44:25.203965 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:44:25.203978 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:44:25.203991 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:44:25.204003 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:44:25.204016 | orchestrator | changed: [testbed-manager]
2026-03-19 01:44:25.204028 | orchestrator |
2026-03-19 01:44:25.204040 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-19 01:44:25.204052 | orchestrator | Thursday 19 March 2026 01:44:23 +0000 (0:00:11.059) 0:04:20.294 ********
2026-03-19 01:44:25.204113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-19 01:44:25.204156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-19 01:44:25.204174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-19 01:44:25.204202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-19 01:44:25.204217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-19 01:44:25.204230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-19 01:44:25.204242 | orchestrator |
2026-03-19 01:44:25.204254 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-19 01:44:25.204267 | orchestrator | Thursday 19 March 2026 01:44:23 +0000 (0:00:00.343) 0:04:20.638 ********
2026-03-19 01:44:25.204280 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204293 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204305 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:44:25.204318 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204330 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:44:25.204343 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:44:25.204363 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204375 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:44:25.204383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 01:44:25.204406 | orchestrator |
2026-03-19 01:44:25.204414 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-19 01:44:25.204422 | orchestrator | Thursday 19 March 2026 01:44:25 +0000 (0:00:01.624) 0:04:22.262 ********
2026-03-19 01:44:25.204430 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:25.204439 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:25.204447 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:25.204454 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:25.204462 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:25.204507 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.095893 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096010 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096027 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096066 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096078 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096088 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096099 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096110 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096120 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096153 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096164 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096185 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096196 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096206 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096217 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096228 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096238 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096260 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:44:32.096272 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096283 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096293 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096304 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096315 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:44:32.096326 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096347 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096358 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:44:32.096385 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096398 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096411 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096423 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096436 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096448 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096468 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:44:32.096481 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096571 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-19 01:44:32.096642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096654 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-19 01:44:32.096679 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096703 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-19 01:44:32.096716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-19 01:44:32.096753 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096776 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-19 01:44:32.096797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096808 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-19 01:44:32.096829 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096839 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096850 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-19 01:44:32.096861 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096872 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-19 01:44:32.096882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-19 01:44:32.096892 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-19 01:44:32.096904 | orchestrator |
2026-03-19 01:44:32.096916 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-19 01:44:32.096934 | orchestrator | Thursday 19 March 2026 01:44:30 +0000 (0:00:05.893) 0:04:28.155 ********
2026-03-19 01:44:32.096945 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.096956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.096966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.096977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.096993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.097005 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.097015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-19 01:44:32.097026 | orchestrator |
2026-03-19 01:44:32.097037 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-19 01:44:32.097047 | orchestrator | Thursday 19 March 2026 01:44:31 +0000 (0:00:00.613) 0:04:28.769 ********
2026-03-19 01:44:32.097058 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097069 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:44:32.097079 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097090 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:44:32.097101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097111 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:44:32.097122 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097133 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:44:32.097143 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:32.097171 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-19 01:44:46.134997 | orchestrator |
2026-03-19 01:44:46.135148 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-19 01:44:46.135167 | orchestrator | Thursday 19 March 2026 01:44:32 +0000 (0:00:00.485) 0:04:29.255 ********
2026-03-19 01:44:46.135179 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-19
01:44:46.135191 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-19 01:44:46.135203 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:44:46.135215 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:44:46.135226 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-19 01:44:46.135237 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-19 01:44:46.135248 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:44:46.135259 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:44:46.135269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-19 01:44:46.135280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-19 01:44:46.135291 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-19 01:44:46.135301 | orchestrator | 2026-03-19 01:44:46.135312 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-19 01:44:46.135324 | orchestrator | Thursday 19 March 2026 01:44:32 +0000 (0:00:00.602) 0:04:29.858 ******** 2026-03-19 01:44:46.135361 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-19 01:44:46.135373 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:44:46.135384 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-19 01:44:46.135395 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-19 01:44:46.135406 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:44:46.135417 
| orchestrator | skipping: [testbed-node-1] 2026-03-19 01:44:46.135433 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-19 01:44:46.135454 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:44:46.135482 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-19 01:44:46.135500 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-19 01:44:46.135518 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-19 01:44:46.135536 | orchestrator | 2026-03-19 01:44:46.135554 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-19 01:44:46.135666 | orchestrator | Thursday 19 March 2026 01:44:33 +0000 (0:00:00.547) 0:04:30.405 ******** 2026-03-19 01:44:46.135688 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:44:46.135706 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:44:46.135724 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:44:46.135743 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:44:46.135761 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:44:46.135779 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:44:46.135797 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:44:46.135817 | orchestrator | 2026-03-19 01:44:46.135836 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-19 01:44:46.135856 | orchestrator | Thursday 19 March 2026 01:44:33 +0000 (0:00:00.316) 0:04:30.722 ******** 2026-03-19 01:44:46.135875 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:44:46.135894 | orchestrator | ok: [testbed-manager] 2026-03-19 01:44:46.135913 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:44:46.135931 | orchestrator | ok: [testbed-node-3] 
2026-03-19 01:44:46.135950 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:44:46.135969 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:44:46.135988 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:44:46.136007 | orchestrator | 2026-03-19 01:44:46.136026 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-19 01:44:46.136047 | orchestrator | Thursday 19 March 2026 01:44:39 +0000 (0:00:06.063) 0:04:36.785 ******** 2026-03-19 01:44:46.136066 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-19 01:44:46.136087 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-19 01:44:46.136114 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:44:46.136132 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:44:46.136150 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-19 01:44:46.136168 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-19 01:44:46.136185 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:44:46.136202 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-19 01:44:46.136220 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:44:46.136268 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-19 01:44:46.136288 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:44:46.136307 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:44:46.136325 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-19 01:44:46.136338 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:44:46.136348 | orchestrator | 2026-03-19 01:44:46.136359 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-19 01:44:46.136385 | orchestrator | Thursday 19 March 2026 01:44:39 +0000 (0:00:00.274) 0:04:37.059 ******** 2026-03-19 01:44:46.136396 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-19 01:44:46.136407 | orchestrator | 
ok: [testbed-node-3] => (item=cron) 2026-03-19 01:44:46.136418 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-19 01:44:46.136451 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-19 01:44:46.136463 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-19 01:44:46.136474 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-19 01:44:46.136484 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-19 01:44:46.136495 | orchestrator | 2026-03-19 01:44:46.136506 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-19 01:44:46.136517 | orchestrator | Thursday 19 March 2026 01:44:40 +0000 (0:00:01.082) 0:04:38.142 ******** 2026-03-19 01:44:46.136530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:44:46.136543 | orchestrator | 2026-03-19 01:44:46.136554 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-19 01:44:46.136594 | orchestrator | Thursday 19 March 2026 01:44:41 +0000 (0:00:00.464) 0:04:38.607 ******** 2026-03-19 01:44:46.136607 | orchestrator | ok: [testbed-manager] 2026-03-19 01:44:46.136618 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:44:46.136629 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:44:46.136639 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:44:46.136650 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:44:46.136661 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:44:46.136671 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:44:46.136682 | orchestrator | 2026-03-19 01:44:46.136692 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-19 01:44:46.136703 | orchestrator | Thursday 19 March 2026 01:44:42 +0000 
(0:00:01.547) 0:04:40.155 ******** 2026-03-19 01:44:46.136714 | orchestrator | ok: [testbed-manager] 2026-03-19 01:44:46.136725 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:44:46.136735 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:44:46.136746 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:44:46.136756 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:44:46.136767 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:44:46.136777 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:44:46.136788 | orchestrator | 2026-03-19 01:44:46.136799 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-19 01:44:46.136809 | orchestrator | Thursday 19 March 2026 01:44:43 +0000 (0:00:00.655) 0:04:40.810 ******** 2026-03-19 01:44:46.136820 | orchestrator | changed: [testbed-manager] 2026-03-19 01:44:46.136831 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:44:46.136842 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:44:46.136852 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:44:46.136863 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:44:46.136873 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:44:46.136884 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:44:46.136894 | orchestrator | 2026-03-19 01:44:46.136905 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-19 01:44:46.136916 | orchestrator | Thursday 19 March 2026 01:44:44 +0000 (0:00:00.678) 0:04:41.488 ******** 2026-03-19 01:44:46.136929 | orchestrator | ok: [testbed-manager] 2026-03-19 01:44:46.136952 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:44:46.136979 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:44:46.136997 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:44:46.137015 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:44:46.137032 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:44:46.137050 | orchestrator | ok: 
[testbed-node-2] 2026-03-19 01:44:46.137068 | orchestrator | 2026-03-19 01:44:46.137085 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-19 01:44:46.137102 | orchestrator | Thursday 19 March 2026 01:44:45 +0000 (0:00:00.732) 0:04:42.221 ******** 2026-03-19 01:44:46.137149 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883189.983284, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:46.137173 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883225.6505606, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:46.137193 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883218.2893429, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:46.137247 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883210.8847513, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074646 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883224.6884906, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074759 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883216.4225771, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074773 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773883220.8597865, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074814 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074853 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074870 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074886 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074932 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074951 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-03-19 01:44:51.074966 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 01:44:51.074996 | orchestrator | 2026-03-19 01:44:51.075015 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-19 01:44:51.075035 | orchestrator | Thursday 19 March 2026 01:44:46 +0000 (0:00:01.071) 0:04:43.292 ******** 2026-03-19 01:44:51.075051 | orchestrator | changed: [testbed-manager] 2026-03-19 01:44:51.075068 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:44:51.075078 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:44:51.075087 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:44:51.075097 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:44:51.075107 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:44:51.075117 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:44:51.075126 | orchestrator | 2026-03-19 01:44:51.075136 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-19 01:44:51.075146 | orchestrator | Thursday 19 March 2026 01:44:47 +0000 (0:00:01.179) 0:04:44.471 ******** 2026-03-19 01:44:51.075157 | orchestrator | changed: [testbed-manager] 2026-03-19 01:44:51.075168 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:44:51.075179 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:44:51.075190 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:44:51.075201 | 
orchestrator | changed: [testbed-node-0] 2026-03-19 01:44:51.075212 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:44:51.075223 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:44:51.075234 | orchestrator | 2026-03-19 01:44:51.075251 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-19 01:44:51.075263 | orchestrator | Thursday 19 March 2026 01:44:48 +0000 (0:00:01.196) 0:04:45.668 ******** 2026-03-19 01:44:51.075274 | orchestrator | changed: [testbed-manager] 2026-03-19 01:44:51.075285 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:44:51.075301 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:44:51.075318 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:44:51.075334 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:44:51.075350 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:44:51.075367 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:44:51.075383 | orchestrator | 2026-03-19 01:44:51.075399 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-19 01:44:51.075414 | orchestrator | Thursday 19 March 2026 01:44:49 +0000 (0:00:01.181) 0:04:46.850 ******** 2026-03-19 01:44:51.075431 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:44:51.075448 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:44:51.075465 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:44:51.075482 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:44:51.075497 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:44:51.075513 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:44:51.075528 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:44:51.075545 | orchestrator | 2026-03-19 01:44:51.075562 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-19 01:44:51.075579 | orchestrator | Thursday 19 March 2026 01:44:49 +0000 
(0:00:00.259) 0:04:47.109 ******** 2026-03-19 01:44:51.075621 | orchestrator | ok: [testbed-manager] 2026-03-19 01:44:51.075633 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:44:51.075643 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:44:51.075653 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:44:51.075662 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:44:51.075671 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:44:51.075681 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:44:51.075690 | orchestrator | 2026-03-19 01:44:51.075700 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-19 01:44:51.075710 | orchestrator | Thursday 19 March 2026 01:44:50 +0000 (0:00:00.763) 0:04:47.873 ******** 2026-03-19 01:44:51.075722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:44:51.075744 | orchestrator | 2026-03-19 01:44:51.075754 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-19 01:44:51.075774 | orchestrator | Thursday 19 March 2026 01:44:51 +0000 (0:00:00.362) 0:04:48.236 ******** 2026-03-19 01:46:09.977677 | orchestrator | ok: [testbed-manager] 2026-03-19 01:46:09.978613 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:46:09.978648 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:46:09.978660 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:46:09.978671 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:46:09.978681 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:46:09.978690 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:46:09.978700 | orchestrator | 2026-03-19 01:46:09.978712 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-03-19 01:46:09.978723 | orchestrator | Thursday 19 March 2026 01:44:59 +0000 (0:00:08.672) 0:04:56.908 ******** 2026-03-19 01:46:09.978733 | orchestrator | ok: [testbed-manager] 2026-03-19 01:46:09.978743 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:46:09.978753 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:46:09.978762 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:46:09.978772 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:46:09.978781 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:46:09.978791 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:46:09.978800 | orchestrator | 2026-03-19 01:46:09.978810 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-19 01:46:09.978820 | orchestrator | Thursday 19 March 2026 01:45:01 +0000 (0:00:01.481) 0:04:58.389 ******** 2026-03-19 01:46:09.978829 | orchestrator | ok: [testbed-manager] 2026-03-19 01:46:09.978839 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:46:09.978848 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:46:09.978858 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:46:09.978867 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:46:09.978877 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:46:09.978886 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:46:09.978895 | orchestrator | 2026-03-19 01:46:09.978941 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-19 01:46:09.978952 | orchestrator | Thursday 19 March 2026 01:45:02 +0000 (0:00:01.195) 0:04:59.584 ******** 2026-03-19 01:46:09.978962 | orchestrator | ok: [testbed-manager] 2026-03-19 01:46:09.978975 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:46:09.978992 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:46:09.979009 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:46:09.979025 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:46:09.979041 | orchestrator | ok: [testbed-node-1] 
2026-03-19 01:46:09.979057 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:09.979077 | orchestrator |
2026-03-19 01:46:09.979095 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-19 01:46:09.979113 | orchestrator | Thursday 19 March 2026 01:45:02 +0000 (0:00:00.290) 0:04:59.875 ********
2026-03-19 01:46:09.979129 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:09.979144 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:09.979160 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:09.979176 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:09.979191 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:09.979206 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:09.979220 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:09.979233 | orchestrator |
2026-03-19 01:46:09.979248 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-19 01:46:09.979264 | orchestrator | Thursday 19 March 2026 01:45:03 +0000 (0:00:00.314) 0:05:00.189 ********
2026-03-19 01:46:09.979280 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:09.979296 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:09.979310 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:09.979325 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:09.979340 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:09.979389 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:09.979405 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:09.979421 | orchestrator |
2026-03-19 01:46:09.979437 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-19 01:46:09.979452 | orchestrator | Thursday 19 March 2026 01:45:03 +0000 (0:00:00.295) 0:05:00.485 ********
2026-03-19 01:46:09.979467 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:09.979482 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:09.979498 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:09.979513 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:09.979526 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:09.979541 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:09.979556 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:09.979571 | orchestrator |
2026-03-19 01:46:09.979585 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-19 01:46:09.979601 | orchestrator | Thursday 19 March 2026 01:45:09 +0000 (0:00:05.731) 0:05:06.216 ********
2026-03-19 01:46:09.979618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:46:09.979638 | orchestrator |
2026-03-19 01:46:09.979655 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-19 01:46:09.979673 | orchestrator | Thursday 19 March 2026 01:45:09 +0000 (0:00:00.438) 0:05:06.655 ********
2026-03-19 01:46:09.979689 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979704 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-19 01:46:09.979720 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979735 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-19 01:46:09.979752 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:09.979792 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979809 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-19 01:46:09.979824 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:09.979839 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979854 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-19 01:46:09.979869 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:09.979886 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:09.979930 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979948 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-19 01:46:09.979962 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.979977 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-19 01:46:09.980022 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:09.980040 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:09.980056 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-19 01:46:09.980073 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-19 01:46:09.980089 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:09.980105 | orchestrator |
2026-03-19 01:46:09.980121 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-19 01:46:09.980137 | orchestrator | Thursday 19 March 2026 01:45:09 +0000 (0:00:00.389) 0:05:07.045 ********
2026-03-19 01:46:09.980152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:46:09.980169 | orchestrator |
2026-03-19 01:46:09.980186 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-19 01:46:09.980201 | orchestrator | Thursday 19 March 2026 01:45:10 +0000 (0:00:00.421) 0:05:07.466 ********
2026-03-19 01:46:09.980232 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-19 01:46:09.980248 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-19 01:46:09.980263 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:09.980277 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-19 01:46:09.980292 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:09.980307 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-19 01:46:09.980323 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:09.980338 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-19 01:46:09.980353 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:09.980368 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-19 01:46:09.980384 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:09.980400 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:09.980416 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-19 01:46:09.980432 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:09.980448 | orchestrator |
2026-03-19 01:46:09.980464 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-19 01:46:09.980479 | orchestrator | Thursday 19 March 2026 01:45:10 +0000 (0:00:00.305) 0:05:07.772 ********
2026-03-19 01:46:09.980497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:46:09.980513 | orchestrator |
2026-03-19 01:46:09.980529 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-19 01:46:09.980545 | orchestrator | Thursday 19 March 2026 01:45:11 +0000 (0:00:00.417) 0:05:08.189 ********
2026-03-19 01:46:09.980561 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:09.980578 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:09.980594 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:09.980612 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:09.980640 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:09.980657 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:09.980672 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:09.980689 | orchestrator |
2026-03-19 01:46:09.980707 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-19 01:46:09.980724 | orchestrator | Thursday 19 March 2026 01:45:44 +0000 (0:00:33.826) 0:05:42.016 ********
2026-03-19 01:46:09.980741 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:09.980759 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:09.980775 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:09.980791 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:09.980809 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:09.980829 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:09.980848 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:09.980865 | orchestrator |
2026-03-19 01:46:09.980884 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-19 01:46:09.980930 | orchestrator | Thursday 19 March 2026 01:45:53 +0000 (0:00:08.836) 0:05:50.852 ********
2026-03-19 01:46:09.980950 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:09.980967 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:09.980986 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:09.981003 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:09.981020 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:09.981038 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:09.981053 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:09.981068 | orchestrator |
2026-03-19 01:46:09.981087 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-19 01:46:09.981105 | orchestrator | Thursday 19 March 2026 01:46:01 +0000 (0:00:08.209) 0:05:59.061 ********
2026-03-19 01:46:09.981139 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:09.981157 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:09.981175 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:09.981191 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:09.981207 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:09.981222 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:09.981238 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:09.981255 | orchestrator |
2026-03-19 01:46:09.981272 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-19 01:46:09.981291 | orchestrator | Thursday 19 March 2026 01:46:03 +0000 (0:00:01.818) 0:06:00.880 ********
2026-03-19 01:46:09.981308 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:09.981325 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:09.981341 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:09.981357 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:09.981374 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:09.981389 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:09.981404 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:09.981421 | orchestrator |
2026-03-19 01:46:09.981457 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-19 01:46:21.113399 | orchestrator | Thursday 19 March 2026 01:46:09 +0000 (0:00:06.248) 0:06:07.129 ********
2026-03-19 01:46:21.113522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:46:21.113540 | orchestrator |
2026-03-19 01:46:21.113553 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-19 01:46:21.113566 | orchestrator | Thursday 19 March 2026 01:46:10 +0000 (0:00:00.585) 0:06:07.715 ********
2026-03-19 01:46:21.113577 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:21.113589 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:21.113600 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:21.113611 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:21.113621 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:21.113632 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:21.113643 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:21.113654 | orchestrator |
2026-03-19 01:46:21.113665 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-19 01:46:21.113676 | orchestrator | Thursday 19 March 2026 01:46:11 +0000 (0:00:00.716) 0:06:08.431 ********
2026-03-19 01:46:21.113686 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:21.113698 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:21.113709 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:21.113720 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:21.113730 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:21.113741 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:21.113752 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:21.113762 | orchestrator |
2026-03-19 01:46:21.113773 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-19 01:46:21.113784 | orchestrator | Thursday 19 March 2026 01:46:13 +0000 (0:00:01.756) 0:06:10.188 ********
2026-03-19 01:46:21.113795 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:46:21.113806 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:46:21.113817 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:46:21.113827 | orchestrator | changed: [testbed-manager]
2026-03-19 01:46:21.113838 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:46:21.113849 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:46:21.113860 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:46:21.113871 | orchestrator |
2026-03-19 01:46:21.113882 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-19 01:46:21.113893 | orchestrator | Thursday 19 March 2026 01:46:13 +0000 (0:00:00.766) 0:06:10.955 ********
2026-03-19 01:46:21.113904 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.113920 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.114000 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.114083 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:21.114098 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:21.114121 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:21.114133 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:21.114146 | orchestrator |
2026-03-19 01:46:21.114159 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-19 01:46:21.114171 | orchestrator | Thursday 19 March 2026 01:46:14 +0000 (0:00:00.272) 0:06:11.227 ********
2026-03-19 01:46:21.114183 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.114196 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.114225 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.114238 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:21.114250 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:21.114262 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:21.114274 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:21.114287 | orchestrator |
2026-03-19 01:46:21.114298 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-19 01:46:21.114309 | orchestrator | Thursday 19 March 2026 01:46:14 +0000 (0:00:00.410) 0:06:11.637 ********
2026-03-19 01:46:21.114320 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:21.114331 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:21.114342 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:21.114352 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:21.114363 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:21.114374 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:21.114384 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:21.114395 | orchestrator |
2026-03-19 01:46:21.114405 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-19 01:46:21.114416 | orchestrator | Thursday 19 March 2026 01:46:14 +0000 (0:00:00.285) 0:06:11.922 ********
2026-03-19 01:46:21.114427 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.114438 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.114448 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.114459 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:21.114469 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:21.114480 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:21.114490 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:21.114501 | orchestrator |
2026-03-19 01:46:21.114512 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-19 01:46:21.114524 | orchestrator | Thursday 19 March 2026 01:46:15 +0000 (0:00:00.267) 0:06:12.189 ********
2026-03-19 01:46:21.114534 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:21.114545 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:21.114556 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:21.114566 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:21.114577 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:21.114587 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:21.114598 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:21.114609 | orchestrator |
2026-03-19 01:46:21.114619 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-19 01:46:21.114630 | orchestrator | Thursday 19 March 2026 01:46:15 +0000 (0:00:00.284) 0:06:12.474 ********
2026-03-19 01:46:21.114641 | orchestrator | ok: [testbed-manager] =>
2026-03-19 01:46:21.114652 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114662 | orchestrator | ok: [testbed-node-3] =>
2026-03-19 01:46:21.114673 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114684 | orchestrator | ok: [testbed-node-4] =>
2026-03-19 01:46:21.114694 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114705 | orchestrator | ok: [testbed-node-5] =>
2026-03-19 01:46:21.114716 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114745 | orchestrator | ok: [testbed-node-0] =>
2026-03-19 01:46:21.114756 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114777 | orchestrator | ok: [testbed-node-1] =>
2026-03-19 01:46:21.114788 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114798 | orchestrator | ok: [testbed-node-2] =>
2026-03-19 01:46:21.114809 | orchestrator |   docker_version: 5:27.5.1
2026-03-19 01:46:21.114820 | orchestrator |
2026-03-19 01:46:21.114830 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-19 01:46:21.114841 | orchestrator | Thursday 19 March 2026 01:46:15 +0000 (0:00:00.285) 0:06:12.759 ********
2026-03-19 01:46:21.114852 | orchestrator | ok: [testbed-manager] =>
2026-03-19 01:46:21.114863 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.114873 | orchestrator | ok: [testbed-node-3] =>
2026-03-19 01:46:21.114884 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.114895 | orchestrator | ok: [testbed-node-4] =>
2026-03-19 01:46:21.114906 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.114916 | orchestrator | ok: [testbed-node-5] =>
2026-03-19 01:46:21.114927 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.114937 | orchestrator | ok: [testbed-node-0] =>
2026-03-19 01:46:21.115007 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.115019 | orchestrator | ok: [testbed-node-1] =>
2026-03-19 01:46:21.115029 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.115040 | orchestrator | ok: [testbed-node-2] =>
2026-03-19 01:46:21.115051 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-19 01:46:21.115061 | orchestrator |
2026-03-19 01:46:21.115072 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-19 01:46:21.115084 | orchestrator | Thursday 19 March 2026 01:46:15 +0000 (0:00:00.287) 0:06:13.047 ********
2026-03-19 01:46:21.115094 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.115105 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.115115 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.115126 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:21.115137 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:21.115147 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:21.115158 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:21.115168 | orchestrator |
2026-03-19 01:46:21.115179 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-19 01:46:21.115190 | orchestrator | Thursday 19 March 2026 01:46:16 +0000 (0:00:00.287) 0:06:13.334 ********
2026-03-19 01:46:21.115201 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.115211 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.115222 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.115232 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:46:21.115243 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:46:21.115254 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:46:21.115264 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:46:21.115275 | orchestrator |
2026-03-19 01:46:21.115285 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-19 01:46:21.115296 | orchestrator | Thursday 19 March 2026 01:46:16 +0000 (0:00:00.303) 0:06:13.638 ********
2026-03-19 01:46:21.115309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:46:21.115322 | orchestrator |
2026-03-19 01:46:21.115340 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-19 01:46:21.115351 | orchestrator | Thursday 19 March 2026 01:46:16 +0000 (0:00:00.425) 0:06:14.063 ********
2026-03-19 01:46:21.115362 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:21.115373 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:21.115384 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:21.115394 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:21.115405 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:21.115416 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:21.115426 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:21.115444 | orchestrator |
2026-03-19 01:46:21.115455 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-19 01:46:21.115466 | orchestrator | Thursday 19 March 2026 01:46:17 +0000 (0:00:00.974) 0:06:15.037 ********
2026-03-19 01:46:21.115477 | orchestrator | ok: [testbed-manager]
2026-03-19 01:46:21.115487 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:46:21.115498 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:46:21.115508 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:46:21.115518 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:46:21.115529 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:46:21.115539 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:46:21.115550 | orchestrator |
2026-03-19 01:46:21.115561 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-19 01:46:21.115573 | orchestrator | Thursday 19 March 2026 01:46:20 +0000 (0:00:02.865) 0:06:17.903 ********
2026-03-19 01:46:21.115584 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-19 01:46:21.115595 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-19 01:46:21.115606 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-19 01:46:21.115616 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-19 01:46:21.115627 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-19 01:46:21.115638 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-19 01:46:21.115649 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:46:21.115659 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-19 01:46:21.115670 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-19 01:46:21.115681 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-19 01:46:21.115691 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:46:21.115702 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-19 01:46:21.115714 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-19 01:46:21.115733 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-19 01:46:21.115753 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:46:21.115782 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-19 01:46:21.115811 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-19 01:47:27.383709 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-19 01:47:27.384682 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:47:27.384718 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-19 01:47:27.384731 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-19 01:47:27.384743 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-19 01:47:27.384754 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:47:27.384765 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:47:27.384776 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-19 01:47:27.384787 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-19 01:47:27.384798 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-19 01:47:27.384809 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:47:27.384821 | orchestrator |
2026-03-19 01:47:27.384833 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-19 01:47:27.384845 | orchestrator | Thursday 19 March 2026 01:46:21 +0000 (0:00:00.571) 0:06:18.474 ********
2026-03-19 01:47:27.384857 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.384868 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.384879 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.384889 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.384900 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.384912 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.384928 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.384947 | orchestrator |
2026-03-19 01:47:27.384966 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-19 01:47:27.385020 | orchestrator | Thursday 19 March 2026 01:46:29 +0000 (0:00:07.882) 0:06:26.356 ********
2026-03-19 01:47:27.385042 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.385061 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385077 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385088 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385099 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385109 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385120 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385131 | orchestrator |
2026-03-19 01:47:27.385142 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-19 01:47:27.385153 | orchestrator | Thursday 19 March 2026 01:46:30 +0000 (0:00:01.072) 0:06:27.428 ********
2026-03-19 01:47:27.385164 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.385204 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385216 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385227 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385238 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385248 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385259 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385270 | orchestrator |
2026-03-19 01:47:27.385281 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-19 01:47:27.385292 | orchestrator | Thursday 19 March 2026 01:46:38 +0000 (0:00:08.647) 0:06:36.076 ********
2026-03-19 01:47:27.385303 | orchestrator | changed: [testbed-manager]
2026-03-19 01:47:27.385313 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385324 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385335 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385345 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385356 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385367 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385378 | orchestrator |
2026-03-19 01:47:27.385389 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-19 01:47:27.385400 | orchestrator | Thursday 19 March 2026 01:46:42 +0000 (0:00:03.777) 0:06:39.854 ********
2026-03-19 01:47:27.385411 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.385422 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385432 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385443 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385454 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385464 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385475 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385486 | orchestrator |
2026-03-19 01:47:27.385496 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-19 01:47:27.385507 | orchestrator | Thursday 19 March 2026 01:46:44 +0000 (0:00:01.384) 0:06:41.238 ********
2026-03-19 01:47:27.385518 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.385529 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385540 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385550 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385561 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385571 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385582 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385593 | orchestrator |
2026-03-19 01:47:27.385604 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-19 01:47:27.385615 | orchestrator | Thursday 19 March 2026 01:46:45 +0000 (0:00:01.641) 0:06:42.879 ********
2026-03-19 01:47:27.385626 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:47:27.385637 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:47:27.385647 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:47:27.385658 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:47:27.385668 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:47:27.385679 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:47:27.385700 | orchestrator | changed: [testbed-manager]
2026-03-19 01:47:27.385711 | orchestrator |
2026-03-19 01:47:27.385722 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-19 01:47:27.385733 | orchestrator | Thursday 19 March 2026 01:46:46 +0000 (0:00:00.643) 0:06:43.522 ********
2026-03-19 01:47:27.385744 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.385754 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385765 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385790 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385801 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385823 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385834 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385845 | orchestrator |
2026-03-19 01:47:27.385856 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-19 01:47:27.385890 | orchestrator | Thursday 19 March 2026 01:46:57 +0000 (0:00:11.167) 0:06:54.690 ********
2026-03-19 01:47:27.385902 | orchestrator | changed: [testbed-manager]
2026-03-19 01:47:27.385913 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.385923 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.385934 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.385945 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.385955 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.385966 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.385976 | orchestrator |
2026-03-19 01:47:27.385987 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-19 01:47:27.385998 | orchestrator | Thursday 19 March 2026 01:46:58 +0000 (0:00:00.924) 0:06:55.615 ********
2026-03-19 01:47:27.386009 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.386078 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.386090 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.386101 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.386112 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.386123 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.386220 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.386241 | orchestrator |
2026-03-19 01:47:27.386260 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-19 01:47:27.386277 | orchestrator | Thursday 19 March 2026 01:47:08 +0000 (0:00:09.588) 0:07:05.203 ********
2026-03-19 01:47:27.386288 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.386299 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.386310 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.386320 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.386331 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.386342 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.386352 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.386363 | orchestrator |
2026-03-19 01:47:27.386374 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-19 01:47:27.386384 | orchestrator | Thursday 19 March 2026 01:47:20 +0000 (0:00:12.369) 0:07:17.573 ********
2026-03-19 01:47:27.386395 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-19 01:47:27.386406 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-19 01:47:27.386417 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-19 01:47:27.386428 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-19 01:47:27.386438 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-19 01:47:27.386449 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-19 01:47:27.386459 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-19 01:47:27.386470 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-19 01:47:27.386518 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-19 01:47:27.386530 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-19 01:47:27.386596 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-19 01:47:27.386621 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-19 01:47:27.386632 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-19 01:47:27.386643 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-19 01:47:27.386654 | orchestrator |
2026-03-19 01:47:27.386664 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-19 01:47:27.386675 | orchestrator | Thursday 19 March 2026 01:47:21 +0000 (0:00:01.275) 0:07:18.849 ********
2026-03-19 01:47:27.386691 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:47:27.386702 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:47:27.386713 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:47:27.386724 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:47:27.386735 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:47:27.386746 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:47:27.386757 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:47:27.386767 | orchestrator |
2026-03-19 01:47:27.386778 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-19 01:47:27.386789 | orchestrator | Thursday 19 March 2026 01:47:22 +0000 (0:00:00.508) 0:07:19.358 ********
2026-03-19 01:47:27.386800 | orchestrator | ok: [testbed-manager]
2026-03-19 01:47:27.386811 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:47:27.386822 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:47:27.386832 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:47:27.386843 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:47:27.386853 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:47:27.386864 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:47:27.386875 | orchestrator |
2026-03-19 01:47:27.386886 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-19 01:47:27.386898 | orchestrator | Thursday 19 March 2026 01:47:26 +0000 (0:00:04.247) 0:07:23.605 ********
2026-03-19 01:47:27.386909 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:47:27.386925 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:47:27.386943 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:47:27.386960 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:47:27.386978 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:47:27.386996 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:47:27.387015 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:47:27.387026 | orchestrator |
2026-03-19 01:47:27.387038 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-19 01:47:27.387049 | orchestrator | Thursday 19 March 2026 01:47:26 +0000 (0:00:00.457) 0:07:24.063 ********
2026-03-19 01:47:27.387061 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-19 01:47:27.387072 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-19 01:47:27.387083 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:47:27.387093 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-19 01:47:27.387104 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-19 01:47:27.387114 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:47:27.387125 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-19 01:47:27.387135 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-19 01:47:27.387146 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:47:27.387171 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-19 01:47:46.487611 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-19 01:47:46.487757 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:47:46.487775 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-19 01:47:46.487787 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-19 01:47:46.487798 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:47:46.487809 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-19 01:47:46.487845 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-19 01:47:46.487857 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:47:46.487867 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-19 01:47:46.487878 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-19 01:47:46.487889 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:47:46.487900 | orchestrator |
2026-03-19 01:47:46.487912 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-03-19 01:47:46.487924 | orchestrator | Thursday 19 March 2026 01:47:27 +0000 (0:00:00.748) 0:07:24.812 ******** 2026-03-19 01:47:46.487935 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:47:46.487946 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:47:46.487956 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:47:46.487967 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:47:46.487978 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:47:46.487988 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:47:46.487999 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:47:46.488009 | orchestrator | 2026-03-19 01:47:46.488021 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-19 01:47:46.488035 | orchestrator | Thursday 19 March 2026 01:47:28 +0000 (0:00:00.479) 0:07:25.291 ******** 2026-03-19 01:47:46.488054 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:47:46.488071 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:47:46.488090 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:47:46.488108 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:47:46.488128 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:47:46.488146 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:47:46.488165 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:47:46.488183 | orchestrator | 2026-03-19 01:47:46.488202 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-19 01:47:46.488223 | orchestrator | Thursday 19 March 2026 01:47:28 +0000 (0:00:00.478) 0:07:25.769 ******** 2026-03-19 01:47:46.488309 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:47:46.488330 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:47:46.488350 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:47:46.488369 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 01:47:46.488390 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:47:46.488409 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:47:46.488426 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:47:46.488440 | orchestrator | 2026-03-19 01:47:46.488453 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-19 01:47:46.488466 | orchestrator | Thursday 19 March 2026 01:47:29 +0000 (0:00:00.461) 0:07:26.231 ******** 2026-03-19 01:47:46.488478 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.488490 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.488500 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.488511 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:47:46.488522 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.488532 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.488543 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.488553 | orchestrator | 2026-03-19 01:47:46.488564 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-19 01:47:46.488575 | orchestrator | Thursday 19 March 2026 01:47:30 +0000 (0:00:01.932) 0:07:28.163 ******** 2026-03-19 01:47:46.488587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:47:46.488600 | orchestrator | 2026-03-19 01:47:46.488611 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-19 01:47:46.488621 | orchestrator | Thursday 19 March 2026 01:47:31 +0000 (0:00:00.814) 0:07:28.978 ******** 2026-03-19 01:47:46.488648 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.488689 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:47:46.488701 | orchestrator | changed: 
[testbed-node-4] 2026-03-19 01:47:46.488711 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:47:46.488722 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:47:46.488733 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:47:46.488743 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:47:46.488754 | orchestrator | 2026-03-19 01:47:46.488764 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-19 01:47:46.488775 | orchestrator | Thursday 19 March 2026 01:47:32 +0000 (0:00:00.806) 0:07:29.784 ******** 2026-03-19 01:47:46.488786 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.488796 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:47:46.488807 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:47:46.488817 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:47:46.488828 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:47:46.488845 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:47:46.488863 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:47:46.488883 | orchestrator | 2026-03-19 01:47:46.488900 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-19 01:47:46.488920 | orchestrator | Thursday 19 March 2026 01:47:33 +0000 (0:00:00.828) 0:07:30.613 ******** 2026-03-19 01:47:46.488938 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.488957 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:47:46.488975 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:47:46.488994 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:47:46.489013 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:47:46.489031 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:47:46.489049 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:47:46.489068 | orchestrator | 2026-03-19 01:47:46.489088 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-19 01:47:46.489133 | orchestrator | Thursday 19 March 2026 01:47:35 +0000 (0:00:01.571) 0:07:32.185 ******** 2026-03-19 01:47:46.489156 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:47:46.489175 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.489194 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.489213 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:47:46.489261 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.489282 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.489299 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.489316 | orchestrator | 2026-03-19 01:47:46.489335 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-19 01:47:46.489354 | orchestrator | Thursday 19 March 2026 01:47:36 +0000 (0:00:01.478) 0:07:33.663 ******** 2026-03-19 01:47:46.489373 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.489391 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:47:46.489410 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:47:46.489429 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:47:46.489448 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:47:46.489466 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:47:46.489484 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:47:46.489505 | orchestrator | 2026-03-19 01:47:46.489524 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-19 01:47:46.489540 | orchestrator | Thursday 19 March 2026 01:47:37 +0000 (0:00:01.334) 0:07:34.997 ******** 2026-03-19 01:47:46.489559 | orchestrator | changed: [testbed-manager] 2026-03-19 01:47:46.489576 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:47:46.489594 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:47:46.489611 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:47:46.489629 | orchestrator | changed: 
[testbed-node-1] 2026-03-19 01:47:46.489648 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:47:46.489665 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:47:46.489684 | orchestrator | 2026-03-19 01:47:46.489697 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-19 01:47:46.489708 | orchestrator | Thursday 19 March 2026 01:47:39 +0000 (0:00:01.556) 0:07:36.554 ******** 2026-03-19 01:47:46.489735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:47:46.489752 | orchestrator | 2026-03-19 01:47:46.489771 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-19 01:47:46.489789 | orchestrator | Thursday 19 March 2026 01:47:40 +0000 (0:00:01.024) 0:07:37.579 ******** 2026-03-19 01:47:46.489805 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.489820 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.489837 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.489853 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:47:46.489871 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.489886 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.489902 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.489918 | orchestrator | 2026-03-19 01:47:46.489935 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-19 01:47:46.489951 | orchestrator | Thursday 19 March 2026 01:47:41 +0000 (0:00:01.336) 0:07:38.916 ******** 2026-03-19 01:47:46.489968 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.489983 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.489999 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.490130 | orchestrator | ok: [testbed-node-5] 
2026-03-19 01:47:46.490176 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.490197 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.490215 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.490258 | orchestrator | 2026-03-19 01:47:46.490281 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-19 01:47:46.490301 | orchestrator | Thursday 19 March 2026 01:47:42 +0000 (0:00:01.115) 0:07:40.032 ******** 2026-03-19 01:47:46.490321 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.490342 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.490361 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.490381 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.490405 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:47:46.490424 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.490443 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.490464 | orchestrator | 2026-03-19 01:47:46.490483 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-19 01:47:46.490503 | orchestrator | Thursday 19 March 2026 01:47:43 +0000 (0:00:01.105) 0:07:41.137 ******** 2026-03-19 01:47:46.490523 | orchestrator | ok: [testbed-manager] 2026-03-19 01:47:46.490543 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:47:46.490562 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:47:46.490578 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:47:46.490591 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:47:46.490603 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:47:46.490614 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:47:46.490626 | orchestrator | 2026-03-19 01:47:46.490638 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-19 01:47:46.490652 | orchestrator | Thursday 19 March 2026 01:47:45 +0000 (0:00:01.358) 0:07:42.495 ******** 2026-03-19 01:47:46.490666 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:47:46.490678 | orchestrator | 2026-03-19 01:47:46.490690 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:47:46.490703 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.854) 0:07:43.350 ******** 2026-03-19 01:47:46.490715 | orchestrator | 2026-03-19 01:47:46.490727 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:47:46.490740 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.039) 0:07:43.389 ******** 2026-03-19 01:47:46.490765 | orchestrator | 2026-03-19 01:47:46.490778 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:47:46.490790 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.038) 0:07:43.427 ******** 2026-03-19 01:47:46.490802 | orchestrator | 2026-03-19 01:47:46.490815 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:47:46.490846 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.046) 0:07:43.474 ******** 2026-03-19 01:48:12.613808 | orchestrator | 2026-03-19 01:48:12.613954 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:48:12.613972 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.040) 0:07:43.514 ******** 2026-03-19 01:48:12.613984 | orchestrator | 2026-03-19 01:48:12.613996 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:48:12.614007 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.040) 0:07:43.554 ******** 2026-03-19 01:48:12.614066 | orchestrator | 
2026-03-19 01:48:12.614079 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-19 01:48:12.614090 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.045) 0:07:43.599 ******** 2026-03-19 01:48:12.614101 | orchestrator | 2026-03-19 01:48:12.614164 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-19 01:48:12.614178 | orchestrator | Thursday 19 March 2026 01:47:46 +0000 (0:00:00.039) 0:07:43.639 ******** 2026-03-19 01:48:12.614189 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:12.614202 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:12.614213 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:12.614224 | orchestrator | 2026-03-19 01:48:12.614235 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-19 01:48:12.614246 | orchestrator | Thursday 19 March 2026 01:47:47 +0000 (0:00:01.356) 0:07:44.996 ******** 2026-03-19 01:48:12.614257 | orchestrator | changed: [testbed-manager] 2026-03-19 01:48:12.614269 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:12.614280 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:12.614291 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:12.614303 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:12.614338 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:12.614351 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:12.614364 | orchestrator | 2026-03-19 01:48:12.614377 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-19 01:48:12.614390 | orchestrator | Thursday 19 March 2026 01:47:49 +0000 (0:00:01.515) 0:07:46.511 ******** 2026-03-19 01:48:12.614403 | orchestrator | changed: [testbed-manager] 2026-03-19 01:48:12.614416 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:12.614429 | orchestrator | changed: [testbed-node-4] 
2026-03-19 01:48:12.614441 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:12.614457 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:12.614476 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:12.614495 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:12.614513 | orchestrator | 2026-03-19 01:48:12.614545 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-19 01:48:12.614563 | orchestrator | Thursday 19 March 2026 01:47:50 +0000 (0:00:01.194) 0:07:47.706 ******** 2026-03-19 01:48:12.614582 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:12.614600 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:12.614617 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:12.614634 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:12.614653 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:12.614672 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:12.614690 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:12.614708 | orchestrator | 2026-03-19 01:48:12.614726 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-19 01:48:12.614745 | orchestrator | Thursday 19 March 2026 01:47:53 +0000 (0:00:02.556) 0:07:50.263 ******** 2026-03-19 01:48:12.614788 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:12.614831 | orchestrator | 2026-03-19 01:48:12.614842 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-19 01:48:12.614854 | orchestrator | Thursday 19 March 2026 01:47:53 +0000 (0:00:00.092) 0:07:50.355 ******** 2026-03-19 01:48:12.614865 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.614876 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:12.614886 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:12.614897 | orchestrator | changed: [testbed-node-4] 2026-03-19 
01:48:12.614907 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:12.614918 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:12.614928 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:12.614939 | orchestrator | 2026-03-19 01:48:12.614950 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-19 01:48:12.614962 | orchestrator | Thursday 19 March 2026 01:47:54 +0000 (0:00:00.916) 0:07:51.272 ******** 2026-03-19 01:48:12.614973 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:12.614983 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:12.614994 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:12.615004 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:12.615015 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:12.615025 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:12.615036 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:12.615046 | orchestrator | 2026-03-19 01:48:12.615057 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-19 01:48:12.615068 | orchestrator | Thursday 19 March 2026 01:47:54 +0000 (0:00:00.468) 0:07:51.741 ******** 2026-03-19 01:48:12.615080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:48:12.615093 | orchestrator | 2026-03-19 01:48:12.615104 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-19 01:48:12.615115 | orchestrator | Thursday 19 March 2026 01:47:55 +0000 (0:00:00.983) 0:07:52.725 ******** 2026-03-19 01:48:12.615125 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.615136 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:12.615147 | orchestrator 
| ok: [testbed-node-4] 2026-03-19 01:48:12.615157 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:12.615168 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:12.615178 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:12.615189 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:12.615200 | orchestrator | 2026-03-19 01:48:12.615211 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-19 01:48:12.615222 | orchestrator | Thursday 19 March 2026 01:47:56 +0000 (0:00:00.834) 0:07:53.560 ******** 2026-03-19 01:48:12.615233 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-19 01:48:12.615264 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-19 01:48:12.615277 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-19 01:48:12.615287 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-19 01:48:12.615298 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-19 01:48:12.615354 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-19 01:48:12.615369 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-19 01:48:12.615380 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-19 01:48:12.615391 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-19 01:48:12.615401 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-19 01:48:12.615412 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-19 01:48:12.615423 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-19 01:48:12.615434 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-19 01:48:12.615455 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-19 01:48:12.615466 | orchestrator | 2026-03-19 01:48:12.615476 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-19 01:48:12.615487 | orchestrator | Thursday 19 March 2026 01:47:58 +0000 (0:00:02.526) 0:07:56.086 ******** 2026-03-19 01:48:12.615498 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:12.615509 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:12.615525 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:12.615543 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:12.615561 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:12.615579 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:12.615596 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:12.615610 | orchestrator | 2026-03-19 01:48:12.615628 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-19 01:48:12.615646 | orchestrator | Thursday 19 March 2026 01:47:59 +0000 (0:00:00.661) 0:07:56.748 ******** 2026-03-19 01:48:12.615668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:48:12.615688 | orchestrator | 2026-03-19 01:48:12.615708 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-19 01:48:12.615719 | orchestrator | Thursday 19 March 2026 01:48:00 +0000 (0:00:00.805) 0:07:57.553 ******** 2026-03-19 01:48:12.615731 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.615741 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:12.615752 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:12.615763 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:12.615774 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:12.615785 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:12.615796 | orchestrator | ok: 
[testbed-node-2] 2026-03-19 01:48:12.615806 | orchestrator | 2026-03-19 01:48:12.615817 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-19 01:48:12.615836 | orchestrator | Thursday 19 March 2026 01:48:01 +0000 (0:00:00.882) 0:07:58.436 ******** 2026-03-19 01:48:12.615847 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.615858 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:12.615868 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:12.615879 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:12.615890 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:12.615900 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:12.615910 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:12.615921 | orchestrator | 2026-03-19 01:48:12.615932 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-19 01:48:12.615943 | orchestrator | Thursday 19 March 2026 01:48:02 +0000 (0:00:01.004) 0:07:59.440 ******** 2026-03-19 01:48:12.615954 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:12.615964 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:12.615975 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:12.615986 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:12.615997 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:12.616007 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:12.616018 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:12.616029 | orchestrator | 2026-03-19 01:48:12.616039 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-19 01:48:12.616050 | orchestrator | Thursday 19 March 2026 01:48:02 +0000 (0:00:00.465) 0:07:59.906 ******** 2026-03-19 01:48:12.616061 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.616071 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:12.616082 | 
orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:12.616092 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:12.616103 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:12.616114 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:12.616130 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:12.616160 | orchestrator | 2026-03-19 01:48:12.616178 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-19 01:48:12.616195 | orchestrator | Thursday 19 March 2026 01:48:04 +0000 (0:00:01.579) 0:08:01.486 ******** 2026-03-19 01:48:12.616213 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:12.616231 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:12.616250 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:12.616269 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:12.616288 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:12.616304 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:12.616393 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:12.616406 | orchestrator | 2026-03-19 01:48:12.616417 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-19 01:48:12.616428 | orchestrator | Thursday 19 March 2026 01:48:04 +0000 (0:00:00.447) 0:08:01.934 ******** 2026-03-19 01:48:12.616439 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:12.616449 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:12.616460 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:12.616471 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:12.616482 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:12.616492 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:12.616512 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:46.206303 | orchestrator | 2026-03-19 01:48:46.206490 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-19 01:48:46.206511 | orchestrator | Thursday 19 March 2026 01:48:12 +0000 (0:00:07.833) 0:08:09.767 ******** 2026-03-19 01:48:46.206521 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.206531 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:46.206541 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:46.206550 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:46.206559 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:46.206567 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:46.206576 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:46.206585 | orchestrator | 2026-03-19 01:48:46.206594 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-19 01:48:46.206603 | orchestrator | Thursday 19 March 2026 01:48:14 +0000 (0:00:01.608) 0:08:11.376 ******** 2026-03-19 01:48:46.206612 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.206620 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:46.206629 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:46.206637 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:46.206646 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:46.206655 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:46.206663 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:46.206672 | orchestrator | 2026-03-19 01:48:46.206680 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-19 01:48:46.206689 | orchestrator | Thursday 19 March 2026 01:48:16 +0000 (0:00:01.811) 0:08:13.187 ******** 2026-03-19 01:48:46.206698 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.206706 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:46.206715 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:46.206723 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:46.206732 | 
orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:46.206740 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:46.206749 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:48:46.206758 | orchestrator | 2026-03-19 01:48:46.206766 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 01:48:46.206775 | orchestrator | Thursday 19 March 2026 01:48:17 +0000 (0:00:01.624) 0:08:14.812 ******** 2026-03-19 01:48:46.206784 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.206792 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.206801 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.206809 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.206818 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.206853 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.206864 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.206874 | orchestrator | 2026-03-19 01:48:46.206885 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 01:48:46.206894 | orchestrator | Thursday 19 March 2026 01:48:18 +0000 (0:00:00.817) 0:08:15.629 ******** 2026-03-19 01:48:46.206904 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:46.206914 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:46.206925 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:46.206935 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:46.206945 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:46.206954 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:46.206965 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:46.206975 | orchestrator | 2026-03-19 01:48:46.206990 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-19 01:48:46.207005 | orchestrator | Thursday 19 March 2026 01:48:19 +0000 (0:00:00.790) 0:08:16.419 ******** 
2026-03-19 01:48:46.207028 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:46.207042 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:46.207055 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:46.207070 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:46.207084 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:46.207098 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:46.207112 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:46.207125 | orchestrator | 2026-03-19 01:48:46.207138 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-19 01:48:46.207175 | orchestrator | Thursday 19 March 2026 01:48:19 +0000 (0:00:00.428) 0:08:16.847 ******** 2026-03-19 01:48:46.207189 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207204 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207217 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207232 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207245 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207259 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207273 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.207288 | orchestrator | 2026-03-19 01:48:46.207303 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-19 01:48:46.207318 | orchestrator | Thursday 19 March 2026 01:48:20 +0000 (0:00:00.466) 0:08:17.314 ******** 2026-03-19 01:48:46.207332 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207346 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207361 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207370 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207379 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207388 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207396 | orchestrator | ok: [testbed-node-2] 2026-03-19 
01:48:46.207426 | orchestrator | 2026-03-19 01:48:46.207436 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-19 01:48:46.207445 | orchestrator | Thursday 19 March 2026 01:48:20 +0000 (0:00:00.419) 0:08:17.734 ******** 2026-03-19 01:48:46.207453 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207462 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207470 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207479 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207487 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207496 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207504 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.207512 | orchestrator | 2026-03-19 01:48:46.207521 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-19 01:48:46.207530 | orchestrator | Thursday 19 March 2026 01:48:21 +0000 (0:00:00.548) 0:08:18.283 ******** 2026-03-19 01:48:46.207539 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207547 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207555 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207564 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207582 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207591 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.207599 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207609 | orchestrator | 2026-03-19 01:48:46.207647 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-19 01:48:46.207662 | orchestrator | Thursday 19 March 2026 01:48:27 +0000 (0:00:06.089) 0:08:24.373 ******** 2026-03-19 01:48:46.207671 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:48:46.207680 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:48:46.207689 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:48:46.207697 
| orchestrator | skipping: [testbed-node-5] 2026-03-19 01:48:46.207706 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:48:46.207714 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:48:46.207723 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:48:46.207731 | orchestrator | 2026-03-19 01:48:46.207740 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-19 01:48:46.207748 | orchestrator | Thursday 19 March 2026 01:48:27 +0000 (0:00:00.500) 0:08:24.873 ******** 2026-03-19 01:48:46.207759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:48:46.207769 | orchestrator | 2026-03-19 01:48:46.207778 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-19 01:48:46.207787 | orchestrator | Thursday 19 March 2026 01:48:28 +0000 (0:00:00.981) 0:08:25.854 ******** 2026-03-19 01:48:46.207795 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207804 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207812 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207821 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207829 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207838 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207846 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.207854 | orchestrator | 2026-03-19 01:48:46.207863 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-19 01:48:46.207871 | orchestrator | Thursday 19 March 2026 01:48:30 +0000 (0:00:02.194) 0:08:28.049 ******** 2026-03-19 01:48:46.207880 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207888 | orchestrator | ok: [testbed-node-4] 2026-03-19 
01:48:46.207897 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207905 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207914 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.207922 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.207931 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.207939 | orchestrator | 2026-03-19 01:48:46.207947 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-19 01:48:46.207956 | orchestrator | Thursday 19 March 2026 01:48:32 +0000 (0:00:01.145) 0:08:29.195 ******** 2026-03-19 01:48:46.207965 | orchestrator | ok: [testbed-manager] 2026-03-19 01:48:46.207973 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:48:46.207981 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:48:46.207990 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:48:46.207998 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:48:46.208007 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:48:46.208015 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:48:46.208023 | orchestrator | 2026-03-19 01:48:46.208032 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-19 01:48:46.208041 | orchestrator | Thursday 19 March 2026 01:48:32 +0000 (0:00:00.832) 0:08:30.027 ******** 2026-03-19 01:48:46.208056 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208066 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208083 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208091 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208100 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208109 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208117 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-19 01:48:46.208126 | orchestrator | 2026-03-19 01:48:46.208134 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-19 01:48:46.208143 | orchestrator | Thursday 19 March 2026 01:48:34 +0000 (0:00:01.892) 0:08:31.919 ******** 2026-03-19 01:48:46.208152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:48:46.208161 | orchestrator | 2026-03-19 01:48:46.208169 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-19 01:48:46.208178 | orchestrator | Thursday 19 March 2026 01:48:35 +0000 (0:00:00.755) 0:08:32.675 ******** 2026-03-19 01:48:46.208187 | orchestrator | changed: [testbed-manager] 2026-03-19 01:48:46.208195 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:48:46.208204 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:48:46.208213 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:48:46.208221 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:48:46.208230 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:48:46.208238 | orchestrator | changed: 
[testbed-node-2] 2026-03-19 01:48:46.208247 | orchestrator | 2026-03-19 01:48:46.208262 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-19 01:49:17.165168 | orchestrator | Thursday 19 March 2026 01:48:46 +0000 (0:00:10.683) 0:08:43.359 ******** 2026-03-19 01:49:17.165263 | orchestrator | ok: [testbed-manager] 2026-03-19 01:49:17.165273 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:49:17.165279 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:49:17.165285 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:49:17.165291 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:49:17.165296 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:49:17.165301 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:49:17.165306 | orchestrator | 2026-03-19 01:49:17.165312 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-19 01:49:17.165318 | orchestrator | Thursday 19 March 2026 01:48:48 +0000 (0:00:01.980) 0:08:45.339 ******** 2026-03-19 01:49:17.165323 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:49:17.165328 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:49:17.165333 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:49:17.165338 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:49:17.165343 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:49:17.165348 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:49:17.165354 | orchestrator | 2026-03-19 01:49:17.165359 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-19 01:49:17.165364 | orchestrator | Thursday 19 March 2026 01:48:49 +0000 (0:00:01.296) 0:08:46.635 ******** 2026-03-19 01:49:17.165369 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.165375 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.165380 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.165385 | orchestrator | changed: 
[testbed-node-5] 2026-03-19 01:49:17.165390 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.165395 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.165400 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.165425 | orchestrator | 2026-03-19 01:49:17.165431 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-19 01:49:17.165436 | orchestrator | 2026-03-19 01:49:17.165441 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-19 01:49:17.165446 | orchestrator | Thursday 19 March 2026 01:48:50 +0000 (0:00:01.336) 0:08:47.972 ******** 2026-03-19 01:49:17.165452 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:49:17.165459 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:49:17.165467 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:49:17.165479 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:49:17.165558 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:49:17.165567 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:49:17.165576 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:49:17.165584 | orchestrator | 2026-03-19 01:49:17.165592 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-19 01:49:17.165601 | orchestrator | 2026-03-19 01:49:17.165609 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-19 01:49:17.165618 | orchestrator | Thursday 19 March 2026 01:48:51 +0000 (0:00:00.634) 0:08:48.606 ******** 2026-03-19 01:49:17.165627 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.165635 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.165643 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.165651 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.165659 | orchestrator | changed: [testbed-node-0] 2026-03-19 
01:49:17.165667 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.165675 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.165684 | orchestrator | 2026-03-19 01:49:17.165693 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-19 01:49:17.165718 | orchestrator | Thursday 19 March 2026 01:48:52 +0000 (0:00:01.414) 0:08:50.020 ******** 2026-03-19 01:49:17.165728 | orchestrator | ok: [testbed-manager] 2026-03-19 01:49:17.165738 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:49:17.165747 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:49:17.165755 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:49:17.165764 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:49:17.165773 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:49:17.165780 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:49:17.165786 | orchestrator | 2026-03-19 01:49:17.165792 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-19 01:49:17.165798 | orchestrator | Thursday 19 March 2026 01:48:54 +0000 (0:00:01.402) 0:08:51.423 ******** 2026-03-19 01:49:17.165803 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:49:17.165810 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:49:17.165816 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:49:17.165821 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:49:17.165827 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:49:17.165833 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:49:17.165839 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:49:17.165844 | orchestrator | 2026-03-19 01:49:17.165850 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-19 01:49:17.165856 | orchestrator | Thursday 19 March 2026 01:48:54 +0000 (0:00:00.449) 0:08:51.872 ******** 2026-03-19 01:49:17.165863 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:49:17.165871 | orchestrator | 2026-03-19 01:49:17.165877 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-19 01:49:17.165883 | orchestrator | Thursday 19 March 2026 01:48:55 +0000 (0:00:00.918) 0:08:52.791 ******** 2026-03-19 01:49:17.165889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 01:49:17.165905 | orchestrator | 2026-03-19 01:49:17.165911 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-19 01:49:17.165917 | orchestrator | Thursday 19 March 2026 01:48:56 +0000 (0:00:00.755) 0:08:53.547 ******** 2026-03-19 01:49:17.165923 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.165928 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.165934 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.165940 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.165945 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.165951 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.165957 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.165963 | orchestrator | 2026-03-19 01:49:17.165983 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-19 01:49:17.165989 | orchestrator | Thursday 19 March 2026 01:49:06 +0000 (0:00:10.299) 0:09:03.847 ******** 2026-03-19 01:49:17.165995 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166001 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166006 | orchestrator | changed: [testbed-node-4] 2026-03-19 
01:49:17.166012 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166062 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166068 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166074 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166079 | orchestrator | 2026-03-19 01:49:17.166085 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-19 01:49:17.166091 | orchestrator | Thursday 19 March 2026 01:49:07 +0000 (0:00:00.854) 0:09:04.701 ******** 2026-03-19 01:49:17.166096 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166101 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.166106 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166111 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166116 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166121 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166125 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166130 | orchestrator | 2026-03-19 01:49:17.166135 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-19 01:49:17.166141 | orchestrator | Thursday 19 March 2026 01:49:08 +0000 (0:00:01.269) 0:09:05.970 ******** 2026-03-19 01:49:17.166146 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166151 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166156 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.166161 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166165 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166170 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166175 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166180 | orchestrator | 2026-03-19 01:49:17.166185 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-19 01:49:17.166190 | orchestrator | Thursday 19 March 2026 01:49:10 +0000 (0:00:01.812) 0:09:07.783 ******** 2026-03-19 01:49:17.166195 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166200 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166205 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.166210 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166215 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166220 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166226 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166231 | orchestrator | 2026-03-19 01:49:17.166236 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-19 01:49:17.166241 | orchestrator | Thursday 19 March 2026 01:49:11 +0000 (0:00:01.176) 0:09:08.959 ******** 2026-03-19 01:49:17.166246 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166251 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166256 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.166261 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166270 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166276 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166281 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166286 | orchestrator | 2026-03-19 01:49:17.166291 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-19 01:49:17.166296 | orchestrator | 2026-03-19 01:49:17.166305 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-19 01:49:17.166310 | orchestrator | Thursday 19 March 2026 01:49:12 +0000 (0:00:01.086) 0:09:10.046 ******** 2026-03-19 01:49:17.166316 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-19 01:49:17.166321 | orchestrator | 2026-03-19 01:49:17.166326 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-19 01:49:17.166331 | orchestrator | Thursday 19 March 2026 01:49:13 +0000 (0:00:00.714) 0:09:10.760 ******** 2026-03-19 01:49:17.166336 | orchestrator | ok: [testbed-manager] 2026-03-19 01:49:17.166341 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:49:17.166346 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:49:17.166351 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:49:17.166356 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:49:17.166361 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:49:17.166366 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:49:17.166371 | orchestrator | 2026-03-19 01:49:17.166376 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-19 01:49:17.166381 | orchestrator | Thursday 19 March 2026 01:49:14 +0000 (0:00:00.911) 0:09:11.672 ******** 2026-03-19 01:49:17.166386 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:17.166391 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:17.166396 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:17.166401 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:17.166406 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:17.166411 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:17.166416 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:17.166421 | orchestrator | 2026-03-19 01:49:17.166426 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-19 01:49:17.166431 | orchestrator | Thursday 19 March 2026 01:49:15 +0000 (0:00:01.042) 0:09:12.714 ******** 2026-03-19 01:49:17.166436 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-19 01:49:17.166441 | orchestrator | 2026-03-19 01:49:17.166446 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-19 01:49:17.166451 | orchestrator | Thursday 19 March 2026 01:49:16 +0000 (0:00:00.822) 0:09:13.536 ******** 2026-03-19 01:49:17.166456 | orchestrator | ok: [testbed-manager] 2026-03-19 01:49:17.166461 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:49:17.166466 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:49:17.166471 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:49:17.166476 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:49:17.166481 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:49:17.166502 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:49:17.166507 | orchestrator | 2026-03-19 01:49:17.166517 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-19 01:49:18.456168 | orchestrator | Thursday 19 March 2026 01:49:17 +0000 (0:00:00.779) 0:09:14.316 ******** 2026-03-19 01:49:18.456252 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:18.456259 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:18.456263 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:18.456267 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:18.456271 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:18.456330 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:18.456335 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:18.456339 | orchestrator | 2026-03-19 01:49:18.456344 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:49:18.456375 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-19 01:49:18.456381 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-19 01:49:18.456385 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-19 01:49:18.456389 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-19 01:49:18.456393 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-19 01:49:18.456396 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 01:49:18.456400 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-19 01:49:18.456404 | orchestrator | 2026-03-19 01:49:18.456408 | orchestrator | 2026-03-19 01:49:18.456411 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:49:18.456415 | orchestrator | Thursday 19 March 2026 01:49:18 +0000 (0:00:00.984) 0:09:15.300 ******** 2026-03-19 01:49:18.456419 | orchestrator | =============================================================================== 2026-03-19 01:49:18.456423 | orchestrator | osism.commons.packages : Download required packages -------------------- 89.58s 2026-03-19 01:49:18.456427 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.17s 2026-03-19 01:49:18.456430 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.83s 2026-03-19 01:49:18.456434 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.40s 2026-03-19 01:49:18.456450 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.37s 2026-03-19 01:49:18.456454 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.50s 2026-03-19 01:49:18.456457 | orchestrator | osism.services.docker : Install containerd package --------------------- 
11.17s 2026-03-19 01:49:18.456461 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.06s 2026-03-19 01:49:18.456466 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.68s 2026-03-19 01:49:18.456470 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.30s 2026-03-19 01:49:18.456474 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.59s 2026-03-19 01:49:18.456477 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.84s 2026-03-19 01:49:18.456481 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.67s 2026-03-19 01:49:18.456485 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.65s 2026-03-19 01:49:18.456519 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.21s 2026-03-19 01:49:18.456524 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.88s 2026-03-19 01:49:18.456530 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.83s 2026-03-19 01:49:18.456536 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.25s 2026-03-19 01:49:18.456540 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.09s 2026-03-19 01:49:18.456544 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.06s 2026-03-19 01:49:18.676419 | orchestrator | + osism apply fail2ban 2026-03-19 01:49:31.146355 | orchestrator | 2026-03-19 01:49:31 | INFO  | Task 92f1caef-ab8c-4999-be8c-4fe4bde2b566 (fail2ban) was prepared for execution. 
2026-03-19 01:49:31.146552 | orchestrator | 2026-03-19 01:49:31 | INFO  | It takes a moment until task 92f1caef-ab8c-4999-be8c-4fe4bde2b566 (fail2ban) has been started and output is visible here. 2026-03-19 01:49:53.458457 | orchestrator | 2026-03-19 01:49:53.458686 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-19 01:49:53.458711 | orchestrator | 2026-03-19 01:49:53.458724 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-19 01:49:53.458736 | orchestrator | Thursday 19 March 2026 01:49:35 +0000 (0:00:00.259) 0:00:00.260 ******** 2026-03-19 01:49:53.458749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:49:53.458763 | orchestrator | 2026-03-19 01:49:53.458774 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-19 01:49:53.458786 | orchestrator | Thursday 19 March 2026 01:49:36 +0000 (0:00:01.088) 0:00:01.348 ******** 2026-03-19 01:49:53.458803 | orchestrator | changed: [testbed-manager] 2026-03-19 01:49:53.458822 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:49:53.458841 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:49:53.458859 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:49:53.458879 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:49:53.458897 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:49:53.458918 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:49:53.458936 | orchestrator | 2026-03-19 01:49:53.458957 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-19 01:49:53.458977 | orchestrator | Thursday 19 March 2026 01:49:48 +0000 (0:00:12.078) 0:00:13.426 ******** 
2026-03-19 01:49:53.459000 | orchestrator | changed: [testbed-manager]
2026-03-19 01:49:53.459020 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:49:53.459041 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:49:53.459062 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:49:53.459080 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:49:53.459099 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:49:53.459119 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:49:53.459138 | orchestrator |
2026-03-19 01:49:53.459157 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-19 01:49:53.459176 | orchestrator | Thursday 19 March 2026 01:49:50 +0000 (0:00:01.405) 0:00:14.831 ********
2026-03-19 01:49:53.459196 | orchestrator | ok: [testbed-manager]
2026-03-19 01:49:53.459215 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:49:53.459235 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:49:53.459254 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:49:53.459272 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:49:53.459292 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:49:53.459310 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:49:53.459329 | orchestrator |
2026-03-19 01:49:53.459348 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-19 01:49:53.459366 | orchestrator | Thursday 19 March 2026 01:49:51 +0000 (0:00:01.429) 0:00:16.261 ********
2026-03-19 01:49:53.459387 | orchestrator | changed: [testbed-manager]
2026-03-19 01:49:53.459405 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:49:53.459424 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:49:53.459442 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:49:53.459462 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:49:53.459481 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:49:53.459499 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:49:53.459518 | orchestrator |
2026-03-19 01:49:53.459537 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:49:53.459557 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459648 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459670 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459689 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459709 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459727 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459743 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:49:53.459755 | orchestrator |
2026-03-19 01:49:53.459765 | orchestrator |
2026-03-19 01:49:53.459776 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:49:53.459791 | orchestrator | Thursday 19 March 2026 01:49:53 +0000 (0:00:01.580) 0:00:17.842 ********
2026-03-19 01:49:53.459809 | orchestrator | ===============================================================================
2026-03-19 01:49:53.459843 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.08s
2026-03-19 01:49:53.459876 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.58s
2026-03-19 01:49:53.459895 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.43s
2026-03-19 01:49:53.459915 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.41s
2026-03-19 01:49:53.459933 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s
2026-03-19 01:49:53.751310 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-19 01:49:53.751416 | orchestrator | + osism apply network
2026-03-19 01:50:05.748571 | orchestrator | 2026-03-19 01:50:05 | INFO  | Task bce1d0d7-be77-4b6b-8c07-6e9cbf912366 (network) was prepared for execution.
2026-03-19 01:50:05.748725 | orchestrator | 2026-03-19 01:50:05 | INFO  | It takes a moment until task bce1d0d7-be77-4b6b-8c07-6e9cbf912366 (network) has been started and output is visible here.
2026-03-19 01:50:33.170281 | orchestrator |
2026-03-19 01:50:33.170390 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-19 01:50:33.170404 | orchestrator |
2026-03-19 01:50:33.170413 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-19 01:50:33.170422 | orchestrator | Thursday 19 March 2026 01:50:09 +0000 (0:00:00.188) 0:00:00.188 ********
2026-03-19 01:50:33.170431 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.170440 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.170448 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.170457 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.170465 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.170473 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.170480 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.170489 | orchestrator |
2026-03-19 01:50:33.170497 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-19 01:50:33.170505 | orchestrator | Thursday 19 March 2026 01:50:10 +0000 (0:00:00.613) 0:00:00.801 ********
2026-03-19 01:50:33.170517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 01:50:33.170533 | orchestrator |
2026-03-19 01:50:33.170545 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-19 01:50:33.170558 | orchestrator | Thursday 19 March 2026 01:50:11 +0000 (0:00:01.745) 0:00:01.846 ********
2026-03-19 01:50:33.170601 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.170616 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.170625 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.170632 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.170640 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.170648 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.170655 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.170752 | orchestrator |
2026-03-19 01:50:33.170765 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-19 01:50:33.170773 | orchestrator | Thursday 19 March 2026 01:50:13 +0000 (0:00:01.679) 0:00:03.591 ********
2026-03-19 01:50:33.170781 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.170789 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.170796 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.170805 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.170812 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.170820 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.170828 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.170838 | orchestrator |
2026-03-19 01:50:33.170847 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-19 01:50:33.170856 | orchestrator | Thursday 19 March 2026 01:50:14 +0000 (0:00:01.679) 0:00:05.271 ********
2026-03-19 01:50:33.170865 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-19 01:50:33.170875 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-19 01:50:33.170885 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-19 01:50:33.170894 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-19 01:50:33.170902 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-19 01:50:33.170928 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-19 01:50:33.170936 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-19 01:50:33.170944 | orchestrator |
2026-03-19 01:50:33.170952 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-19 01:50:33.170964 | orchestrator | Thursday 19 March 2026 01:50:15 +0000 (0:00:00.956) 0:00:06.227 ********
2026-03-19 01:50:33.170972 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 01:50:33.170981 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 01:50:33.170989 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 01:50:33.170997 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 01:50:33.171004 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 01:50:33.171012 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 01:50:33.171020 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 01:50:33.171028 | orchestrator |
2026-03-19 01:50:33.171036 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-19 01:50:33.171044 | orchestrator | Thursday 19 March 2026 01:50:18 +0000 (0:00:03.207) 0:00:09.434 ********
2026-03-19 01:50:33.171052 | orchestrator | changed: [testbed-manager]
2026-03-19 01:50:33.171060 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:50:33.171067 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:50:33.171075 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:50:33.171083 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:50:33.171090 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:50:33.171098 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:50:33.171106 | orchestrator |
2026-03-19 01:50:33.171114 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-19 01:50:33.171122 | orchestrator | Thursday 19 March 2026 01:50:20 +0000 (0:00:01.714) 0:00:11.148 ********
2026-03-19 01:50:33.171130 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-19 01:50:33.171138 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 01:50:33.171145 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 01:50:33.171153 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 01:50:33.171161 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 01:50:33.171169 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 01:50:33.171184 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 01:50:33.171192 | orchestrator |
2026-03-19 01:50:33.171200 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-19 01:50:33.171208 | orchestrator | Thursday 19 March 2026 01:50:22 +0000 (0:00:01.734) 0:00:12.883 ********
2026-03-19 01:50:33.171216 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.171224 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.171232 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.171239 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.171247 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.171255 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.171262 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.171270 | orchestrator |
2026-03-19 01:50:33.171278 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-19 01:50:33.171304 | orchestrator | Thursday 19 March 2026 01:50:23 +0000 (0:00:01.141) 0:00:14.024 ********
2026-03-19 01:50:33.171313 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:50:33.171321 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:33.171329 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:33.171337 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:33.171344 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:33.171352 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:33.171360 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:33.171368 | orchestrator |
2026-03-19 01:50:33.171376 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-19 01:50:33.171384 | orchestrator | Thursday 19 March 2026 01:50:24 +0000 (0:00:00.663) 0:00:14.688 ********
2026-03-19 01:50:33.171392 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.171400 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.171408 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.171415 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.171423 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.171431 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.171439 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.171447 | orchestrator |
2026-03-19 01:50:33.171455 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-19 01:50:33.171463 | orchestrator | Thursday 19 March 2026 01:50:26 +0000 (0:00:02.287) 0:00:16.976 ********
2026-03-19 01:50:33.171471 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:33.171479 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:33.171486 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:33.171494 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:33.171502 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:33.171510 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:33.171519 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-19 01:50:33.171528 | orchestrator |
2026-03-19 01:50:33.171536 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-19 01:50:33.171544 | orchestrator | Thursday 19 March 2026 01:50:27 +0000 (0:00:00.869) 0:00:17.845 ********
2026-03-19 01:50:33.171552 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.171560 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:50:33.171568 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:50:33.171575 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:50:33.171583 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:50:33.171591 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:50:33.171599 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:50:33.171607 | orchestrator |
2026-03-19 01:50:33.171615 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-19 01:50:33.171623 | orchestrator | Thursday 19 March 2026 01:50:29 +0000 (0:00:01.636) 0:00:19.482 ********
2026-03-19 01:50:33.171631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 01:50:33.171646 | orchestrator |
2026-03-19 01:50:33.171654 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-19 01:50:33.171662 | orchestrator | Thursday 19 March 2026 01:50:30 +0000 (0:00:01.189) 0:00:20.671 ********
2026-03-19 01:50:33.171670 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.171678 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.171707 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.171719 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.171727 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.171735 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.171743 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.171751 | orchestrator |
2026-03-19 01:50:33.171758 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-19 01:50:33.171766 | orchestrator | Thursday 19 March 2026 01:50:31 +0000 (0:00:01.099) 0:00:21.771 ********
2026-03-19 01:50:33.171774 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:33.171782 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:33.171790 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:33.171797 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:33.171805 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:33.171813 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:33.171820 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:33.171828 | orchestrator |
2026-03-19 01:50:33.171836 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-19 01:50:33.171844 | orchestrator | Thursday 19 March 2026 01:50:31 +0000 (0:00:00.632) 0:00:22.403 ********
2026-03-19 01:50:33.171851 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171859 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171867 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171875 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171882 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171890 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171898 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171906 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171913 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171921 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171929 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171936 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-19 01:50:33.171944 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171952 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-19 01:50:33.171965 | orchestrator |
2026-03-19 01:50:33.171986 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-19 01:50:47.900894 | orchestrator | Thursday 19 March 2026 01:50:33 +0000 (0:00:01.191) 0:00:23.595 ********
2026-03-19 01:50:47.901018 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:50:47.901034 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:47.901046 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:47.901057 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:47.901068 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:47.901079 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:47.901089 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:47.901100 | orchestrator |
2026-03-19 01:50:47.901112 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-19 01:50:47.901154 | orchestrator | Thursday 19 March 2026 01:50:33 +0000 (0:00:00.583) 0:00:24.179 ********
2026-03-19 01:50:47.901167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-4, testbed-node-2, testbed-node-3, testbed-node-5
2026-03-19 01:50:47.901180 | orchestrator |
2026-03-19 01:50:47.901191 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-19 01:50:47.901202 | orchestrator | Thursday 19 March 2026 01:50:37 +0000 (0:00:04.148) 0:00:28.327 ********
2026-03-19 01:50:47.901215 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901262 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901443 | orchestrator |
2026-03-19 01:50:47.901463 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-19 01:50:47.901483 | orchestrator | Thursday 19 March 2026 01:50:42 +0000 (0:00:05.003) 0:00:33.331 ********
2026-03-19 01:50:47.901503 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901559 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-19 01:50:47.901662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:47.901880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:53.057583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-19 01:50:53.057665 | orchestrator |
2026-03-19 01:50:53.057671 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-19 01:50:53.057677 | orchestrator | Thursday 19 March 2026 01:50:47 +0000 (0:00:04.993) 0:00:38.325 ********
2026-03-19 01:50:53.057682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 01:50:53.057687 | orchestrator |
2026-03-19 01:50:53.057691 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-19 01:50:53.057695 | orchestrator | Thursday 19 March 2026 01:50:48 +0000 (0:00:01.113) 0:00:39.439 ********
2026-03-19 01:50:53.057699 | orchestrator | ok: [testbed-manager]
2026-03-19 01:50:53.057704 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:50:53.057708 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:50:53.057711 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:50:53.057715 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:50:53.057719 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:50:53.057722 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:50:53.057726 | orchestrator |
2026-03-19 01:50:53.057778 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-19 01:50:53.057782 | orchestrator | Thursday 19 March 2026 01:50:50 +0000 (0:00:01.004) 0:00:40.443 ********
2026-03-19 01:50:53.057786 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057791 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057795 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057799 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057802 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057806 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057810 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057814 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057818 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:50:53.057823 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057841 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057845 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057849 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057853 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:53.057857 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057879 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057883 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057887 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057891 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:53.057894 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057898 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057902 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057906 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057910 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:53.057914 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057917 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057921 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057925 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057929 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:53.057932 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:53.057936 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-19 01:50:53.057940 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-19 01:50:53.057944 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-19 01:50:53.057947 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-19 01:50:53.057951 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:53.057955 | orchestrator |
2026-03-19 01:50:53.057959 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-19 01:50:53.057973 | orchestrator | Thursday 19 March 2026 01:50:51 +0000 (0:00:01.670) 0:00:42.114 ********
2026-03-19 01:50:53.057977 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:50:53.057981 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:53.057984 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:53.057988 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:53.057992 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:53.057996 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:53.057999 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:53.058003 | orchestrator |
2026-03-19 01:50:53.058007 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-19 01:50:53.058011 | orchestrator | Thursday 19 March 2026 01:50:52 +0000 (0:00:00.535) 0:00:42.649 ********
2026-03-19 01:50:53.058044 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:50:53.058049 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:50:53.058052 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:50:53.058056 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:50:53.058060 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:50:53.058064 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:50:53.058068 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:50:53.058072 | orchestrator |
2026-03-19 01:50:53.058076 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:50:53.058080 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 01:50:53.058086 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058090 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058097 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058101 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058105 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058109 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 01:50:53.058113 | orchestrator |
2026-03-19 01:50:53.058116 | orchestrator |
2026-03-19 01:50:53.058120 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:50:53.058124 | orchestrator | Thursday 19 March 2026 01:50:52 +0000 (0:00:00.558) 0:00:43.208 ********
2026-03-19 01:50:53.058131 | orchestrator | ===============================================================================
2026-03-19 01:50:53.058135 | orchestrator | osism.commons.network : Create systemd networkd netdev
files ------------ 5.00s 2026-03-19 01:50:53.058139 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.99s 2026-03-19 01:50:53.058142 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.15s 2026-03-19 01:50:53.058146 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.21s 2026-03-19 01:50:53.058150 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s 2026-03-19 01:50:53.058154 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.75s 2026-03-19 01:50:53.058158 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.73s 2026-03-19 01:50:53.058163 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s 2026-03-19 01:50:53.058167 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s 2026-03-19 01:50:53.058171 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.67s 2026-03-19 01:50:53.058176 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2026-03-19 01:50:53.058180 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s 2026-03-19 01:50:53.058184 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2026-03-19 01:50:53.058188 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2026-03-19 01:50:53.058192 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2026-03-19 01:50:53.058197 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s 2026-03-19 01:50:53.058201 | orchestrator | osism.commons.network : Include type specific tasks 
--------------------- 1.04s 2026-03-19 01:50:53.058205 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2026-03-19 01:50:53.058210 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2026-03-19 01:50:53.058214 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2026-03-19 01:50:53.324679 | orchestrator | + osism apply wireguard 2026-03-19 01:51:05.331185 | orchestrator | 2026-03-19 01:51:05 | INFO  | Task 02baa0e9-fc22-47bf-8d81-306298b2cb63 (wireguard) was prepared for execution. 2026-03-19 01:51:05.331288 | orchestrator | 2026-03-19 01:51:05 | INFO  | It takes a moment until task 02baa0e9-fc22-47bf-8d81-306298b2cb63 (wireguard) has been started and output is visible here. 2026-03-19 01:51:22.504240 | orchestrator | 2026-03-19 01:51:22.504338 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-19 01:51:22.504346 | orchestrator | 2026-03-19 01:51:22.504374 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-19 01:51:22.504379 | orchestrator | Thursday 19 March 2026 01:51:09 +0000 (0:00:00.203) 0:00:00.203 ******** 2026-03-19 01:51:22.504383 | orchestrator | ok: [testbed-manager] 2026-03-19 01:51:22.504388 | orchestrator | 2026-03-19 01:51:22.504392 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-19 01:51:22.504396 | orchestrator | Thursday 19 March 2026 01:51:10 +0000 (0:00:01.031) 0:00:01.234 ******** 2026-03-19 01:51:22.504400 | orchestrator | changed: [testbed-manager] 2026-03-19 01:51:22.504404 | orchestrator | 2026-03-19 01:51:22.504411 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-19 01:51:22.504415 | orchestrator | Thursday 19 March 2026 01:51:15 +0000 (0:00:04.958) 0:00:06.193 ******** 2026-03-19 
01:51:22.504419 | orchestrator | changed: [testbed-manager] 2026-03-19 01:51:22.504423 | orchestrator | 2026-03-19 01:51:22.504427 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-19 01:51:22.504430 | orchestrator | Thursday 19 March 2026 01:51:15 +0000 (0:00:00.475) 0:00:06.669 ******** 2026-03-19 01:51:22.504434 | orchestrator | changed: [testbed-manager] 2026-03-19 01:51:22.504438 | orchestrator | 2026-03-19 01:51:22.504442 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-19 01:51:22.504446 | orchestrator | Thursday 19 March 2026 01:51:16 +0000 (0:00:00.372) 0:00:07.041 ******** 2026-03-19 01:51:22.504450 | orchestrator | ok: [testbed-manager] 2026-03-19 01:51:22.504455 | orchestrator | 2026-03-19 01:51:22.504462 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-19 01:51:22.504468 | orchestrator | Thursday 19 March 2026 01:51:16 +0000 (0:00:00.563) 0:00:07.604 ******** 2026-03-19 01:51:22.504474 | orchestrator | ok: [testbed-manager] 2026-03-19 01:51:22.504480 | orchestrator | 2026-03-19 01:51:22.504486 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-19 01:51:22.504491 | orchestrator | Thursday 19 March 2026 01:51:17 +0000 (0:00:00.399) 0:00:08.004 ******** 2026-03-19 01:51:22.504497 | orchestrator | ok: [testbed-manager] 2026-03-19 01:51:22.504502 | orchestrator | 2026-03-19 01:51:22.504508 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-19 01:51:22.504514 | orchestrator | Thursday 19 March 2026 01:51:17 +0000 (0:00:00.398) 0:00:08.402 ******** 2026-03-19 01:51:22.504521 | orchestrator | changed: [testbed-manager] 2026-03-19 01:51:22.504526 | orchestrator | 2026-03-19 01:51:22.504532 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-03-19 01:51:22.504539 | orchestrator | Thursday 19 March 2026 01:51:18 +0000 (0:00:01.197) 0:00:09.599 ********
2026-03-19 01:51:22.504545 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-19 01:51:22.504551 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:22.504559 | orchestrator |
2026-03-19 01:51:22.504564 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-19 01:51:22.504570 | orchestrator | Thursday 19 March 2026 01:51:19 +0000 (0:00:00.924) 0:00:10.524 ********
2026-03-19 01:51:22.504576 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:22.504582 | orchestrator |
2026-03-19 01:51:22.504589 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-19 01:51:22.504594 | orchestrator | Thursday 19 March 2026 01:51:21 +0000 (0:00:01.610) 0:00:12.135 ********
2026-03-19 01:51:22.504600 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:22.504605 | orchestrator |
2026-03-19 01:51:22.504611 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:51:22.504617 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 01:51:22.504625 | orchestrator |
2026-03-19 01:51:22.504630 | orchestrator |
2026-03-19 01:51:22.504637 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:51:22.504642 | orchestrator | Thursday 19 March 2026 01:51:22 +0000 (0:00:00.894) 0:00:13.029 ********
2026-03-19 01:51:22.504658 | orchestrator | ===============================================================================
2026-03-19 01:51:22.504665 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 4.96s
2026-03-19 01:51:22.504672 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.61s
2026-03-19 01:51:22.504679 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-03-19 01:51:22.504685 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.03s
2026-03-19 01:51:22.504692 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2026-03-19 01:51:22.504698 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2026-03-19 01:51:22.504704 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2026-03-19 01:51:22.504710 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.48s
2026-03-19 01:51:22.504716 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-03-19 01:51:22.504722 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-03-19 01:51:22.504729 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.37s
2026-03-19 01:51:22.786298 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-19 01:51:22.818059 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-19 01:51:22.818154 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-19 01:51:22.891326 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 190 0 --:--:-- --:--:-- --:--:-- 191
2026-03-19 01:51:22.907835 | orchestrator | + osism apply --environment custom workarounds
2026-03-19 01:51:24.773210 | orchestrator | 2026-03-19 01:51:24 | INFO  | Trying to run play workarounds in environment custom
2026-03-19 01:51:34.921495 | orchestrator | 2026-03-19 01:51:34 | INFO  | Task 5759a398-1a46-4032-a8cf-52fb175e681f (workarounds) was prepared for execution.
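The wireguard play above generates the server key pair and preshared key on testbed-manager before templating wg0.conf and starting wg-quick@wg0. The role's source is not part of this log; the following is a minimal hypothetical sketch of how such tasks are commonly written, where the file paths, `wg` command invocations, and template name are assumptions, not the role's actual implementation:

```yaml
# Hypothetical sketch of the key-generation steps named in the log above;
# paths and commands are assumptions, not taken from the osism role source.
- name: Create public and private key - server
  ansible.builtin.shell: >
    umask 077 && wg genkey | tee /etc/wireguard/server.privatekey
    | wg pubkey > /etc/wireguard/server.publickey
  args:
    creates: /etc/wireguard/server.privatekey

- name: Create preshared key
  ansible.builtin.shell: umask 077 && wg genpsk > /etc/wireguard/server.presharedkey
  args:
    creates: /etc/wireguard/server.presharedkey

- name: Get public key - server
  ansible.builtin.slurp:
    src: /etc/wireguard/server.publickey
  register: wireguard_public_key_server

- name: Copy wg0.conf configuration file
  ansible.builtin.template:
    src: wg0.conf.j2        # assumed template name
    dest: /etc/wireguard/wg0.conf
    mode: "0600"
  notify: Restart wg0 service
```

The `creates:` guards make the key generation idempotent, which matches the ok/changed pattern visible in the log on repeated runs.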
2026-03-19 01:51:34.921605 | orchestrator | 2026-03-19 01:51:34 | INFO  | It takes a moment until task 5759a398-1a46-4032-a8cf-52fb175e681f (workarounds) has been started and output is visible here.
2026-03-19 01:51:59.793543 | orchestrator |
2026-03-19 01:51:59.793708 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 01:51:59.793740 | orchestrator |
2026-03-19 01:51:59.793761 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-19 01:51:59.793782 | orchestrator | Thursday 19 March 2026 01:51:39 +0000 (0:00:00.131) 0:00:00.131 ********
2026-03-19 01:51:59.793801 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793819 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793838 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793856 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793910 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793930 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793947 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-19 01:51:59.793965 | orchestrator |
2026-03-19 01:51:59.793984 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-19 01:51:59.794003 | orchestrator |
2026-03-19 01:51:59.794086 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-19 01:51:59.794107 | orchestrator | Thursday 19 March 2026 01:51:39 +0000 (0:00:00.766) 0:00:00.897 ********
2026-03-19 01:51:59.794126 | orchestrator | ok: [testbed-manager]
2026-03-19 01:51:59.794146 | orchestrator |
2026-03-19 01:51:59.794205 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-19 01:51:59.794267 | orchestrator |
2026-03-19 01:51:59.794376 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-19 01:51:59.794408 | orchestrator | Thursday 19 March 2026 01:51:42 +0000 (0:00:02.340) 0:00:03.237 ********
2026-03-19 01:51:59.794426 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:51:59.794446 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:51:59.794465 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:51:59.794484 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:51:59.794502 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:51:59.794520 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:51:59.794539 | orchestrator |
2026-03-19 01:51:59.794557 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-19 01:51:59.794574 | orchestrator |
2026-03-19 01:51:59.794593 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-19 01:51:59.794636 | orchestrator | Thursday 19 March 2026 01:51:44 +0000 (0:00:01.933) 0:00:05.171 ********
2026-03-19 01:51:59.794659 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794681 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794700 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794721 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794739 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794759 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-19 01:51:59.794778 | orchestrator |
2026-03-19 01:51:59.794799 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-19 01:51:59.794819 | orchestrator | Thursday 19 March 2026 01:51:45 +0000 (0:00:01.463) 0:00:06.635 ********
2026-03-19 01:51:59.794838 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:51:59.794859 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:51:59.794913 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:51:59.794934 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:51:59.794952 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:51:59.794970 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:51:59.794988 | orchestrator |
2026-03-19 01:51:59.795005 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-19 01:51:59.795022 | orchestrator | Thursday 19 March 2026 01:51:49 +0000 (0:00:03.710) 0:00:10.345 ********
2026-03-19 01:51:59.795039 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:51:59.795058 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:51:59.795078 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:51:59.795097 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:51:59.795115 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:51:59.795133 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:51:59.795152 | orchestrator |
2026-03-19 01:51:59.795171 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-19 01:51:59.795190 | orchestrator |
2026-03-19 01:51:59.795208 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-19 01:51:59.795224 | orchestrator | Thursday 19 March 2026 01:51:49 +0000 (0:00:00.641) 0:00:10.987 ********
2026-03-19 01:51:59.795242 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:51:59.795259 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:51:59.795276 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:51:59.795293 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:51:59.795312 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:51:59.795329 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:59.795346 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:51:59.795363 | orchestrator |
2026-03-19 01:51:59.795378 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-19 01:51:59.795419 | orchestrator | Thursday 19 March 2026 01:51:51 +0000 (0:00:01.500) 0:00:12.487 ********
2026-03-19 01:51:59.795438 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:51:59.795455 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:51:59.795474 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:51:59.795493 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:51:59.795510 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:51:59.795528 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:51:59.795582 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:59.795601 | orchestrator |
2026-03-19 01:51:59.795620 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-19 01:51:59.795639 | orchestrator | Thursday 19 March 2026 01:51:52 +0000 (0:00:01.442) 0:00:13.929 ********
2026-03-19 01:51:59.795657 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:51:59.795674 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:51:59.795691 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:51:59.795707 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:51:59.795725 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:51:59.795741 | orchestrator | ok: [testbed-manager]
2026-03-19 01:51:59.795758 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:51:59.795775 | orchestrator |
2026-03-19 01:51:59.795792 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-19 01:51:59.795810 | orchestrator | Thursday 19 March 2026 01:51:54 +0000 (0:00:01.455) 0:00:15.384 ********
2026-03-19 01:51:59.795827 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:51:59.795843 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:51:59.795859 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:51:59.795912 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:51:59.795932 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:51:59.795949 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:51:59.795966 | orchestrator | changed: [testbed-manager]
2026-03-19 01:51:59.795984 | orchestrator |
2026-03-19 01:51:59.796000 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-19 01:51:59.796016 | orchestrator | Thursday 19 March 2026 01:51:56 +0000 (0:00:01.732) 0:00:17.117 ********
2026-03-19 01:51:59.796034 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:51:59.796049 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:51:59.796067 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:51:59.796084 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:51:59.796101 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:51:59.796118 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:51:59.796136 | orchestrator | skipping: [testbed-manager]
2026-03-19 01:51:59.796153 | orchestrator |
2026-03-19 01:51:59.796169 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-19 01:51:59.796187 | orchestrator |
2026-03-19 01:51:59.796204 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-19 01:51:59.796222 | orchestrator | Thursday 19 March 2026 01:51:56 +0000 (0:00:00.580) 0:00:17.697 ********
2026-03-19 01:51:59.796240 | orchestrator | ok: [testbed-manager]
2026-03-19 01:51:59.796258 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:51:59.796277 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:51:59.796295 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:51:59.796329 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:51:59.796349 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:51:59.796366 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:51:59.796385 | orchestrator |
2026-03-19 01:51:59.796402 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:51:59.796422 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 01:51:59.796442 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796478 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796498 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796516 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796535 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796554 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:51:59.796574 | orchestrator |
2026-03-19 01:51:59.796594 | orchestrator |
2026-03-19 01:51:59.796612 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 01:51:59.796631 | orchestrator | Thursday 19 March 2026 01:51:59 +0000 (0:00:03.146) 0:00:20.844 ********
2026-03-19 01:51:59.796649 | orchestrator | ===============================================================================
2026-03-19 01:51:59.796666 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s
2026-03-19 01:51:59.796684 | orchestrator | Install python3-docker -------------------------------------------------- 3.15s
2026-03-19 01:51:59.796703 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s
2026-03-19 01:51:59.796721 | orchestrator | Apply netplan configuration --------------------------------------------- 1.93s
2026-03-19 01:51:59.796738 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.73s
2026-03-19 01:51:59.796756 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.50s
2026-03-19 01:51:59.796774 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s
2026-03-19 01:51:59.796792 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s
2026-03-19 01:51:59.796811 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.44s
2026-03-19 01:51:59.796829 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2026-03-19 01:51:59.796847 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s
2026-03-19 01:51:59.797065 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s
2026-03-19 01:52:00.330824 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-19 01:52:12.341660 | orchestrator | 2026-03-19 01:52:12 | INFO  | Task 44b966b5-e685-475d-a235-cec4e7894868 (reboot) was prepared for execution.
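The workarounds play distributes the custom CA certificate to the non-manager nodes and refreshes the trust store with the distribution-appropriate command: `update-ca-certificates` runs on these Debian-family (Ubuntu) hosts, while the RedHat-only `update-ca-trust` task is skipped. The play's source is not included in this log; a minimal hypothetical sketch of this pattern, where the module choices, destination path, and conditionals are assumptions based on the task names shown:

```yaml
# Hypothetical sketch of the CA distribution tasks named in the log;
# destination path and when-conditions are assumptions.
- name: Copy custom CA certificates
  ansible.builtin.copy:
    src: /opt/configuration/environments/kolla/certificates/ca/testbed.crt
    dest: /usr/local/share/ca-certificates/testbed.crt   # assumed destination
    mode: "0644"

- name: Run update-ca-certificates
  ansible.builtin.command: update-ca-certificates
  when: ansible_os_family == "Debian"

- name: Run update-ca-trust
  ansible.builtin.command: update-ca-trust
  when: ansible_os_family == "RedHat"
```

Guarding each refresh command on `ansible_os_family` reproduces the changed/skipping split visible in the log above.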
2026-03-19 01:52:12.341766 | orchestrator | 2026-03-19 01:52:12 | INFO  | It takes a moment until task 44b966b5-e685-475d-a235-cec4e7894868 (reboot) has been started and output is visible here.
2026-03-19 01:52:22.763057 | orchestrator |
2026-03-19 01:52:22.763148 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763156 | orchestrator |
2026-03-19 01:52:22.763161 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763166 | orchestrator | Thursday 19 March 2026 01:52:16 +0000 (0:00:00.206) 0:00:00.206 ********
2026-03-19 01:52:22.763171 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:52:22.763176 | orchestrator |
2026-03-19 01:52:22.763180 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763185 | orchestrator | Thursday 19 March 2026 01:52:16 +0000 (0:00:00.105) 0:00:00.312 ********
2026-03-19 01:52:22.763190 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:52:22.763194 | orchestrator |
2026-03-19 01:52:22.763197 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763201 | orchestrator | Thursday 19 March 2026 01:52:17 +0000 (0:00:00.966) 0:00:01.278 ********
2026-03-19 01:52:22.763226 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:52:22.763230 | orchestrator |
2026-03-19 01:52:22.763234 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763237 | orchestrator |
2026-03-19 01:52:22.763241 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763245 | orchestrator | Thursday 19 March 2026 01:52:17 +0000 (0:00:00.126) 0:00:01.404 ********
2026-03-19 01:52:22.763249 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:52:22.763253 | orchestrator |
2026-03-19 01:52:22.763256 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763260 | orchestrator | Thursday 19 March 2026 01:52:17 +0000 (0:00:00.092) 0:00:01.497 ********
2026-03-19 01:52:22.763264 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:52:22.763268 | orchestrator |
2026-03-19 01:52:22.763283 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763287 | orchestrator | Thursday 19 March 2026 01:52:18 +0000 (0:00:00.694) 0:00:02.192 ********
2026-03-19 01:52:22.763291 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:52:22.763294 | orchestrator |
2026-03-19 01:52:22.763298 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763302 | orchestrator |
2026-03-19 01:52:22.763306 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763309 | orchestrator | Thursday 19 March 2026 01:52:18 +0000 (0:00:00.117) 0:00:02.310 ********
2026-03-19 01:52:22.763313 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:52:22.763317 | orchestrator |
2026-03-19 01:52:22.763320 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763324 | orchestrator | Thursday 19 March 2026 01:52:18 +0000 (0:00:00.231) 0:00:02.542 ********
2026-03-19 01:52:22.763328 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:52:22.763332 | orchestrator |
2026-03-19 01:52:22.763336 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763339 | orchestrator | Thursday 19 March 2026 01:52:19 +0000 (0:00:00.712) 0:00:03.254 ********
2026-03-19 01:52:22.763343 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:52:22.763347 | orchestrator |
2026-03-19 01:52:22.763350 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763354 | orchestrator |
2026-03-19 01:52:22.763358 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763361 | orchestrator | Thursday 19 March 2026 01:52:19 +0000 (0:00:00.112) 0:00:03.367 ********
2026-03-19 01:52:22.763365 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:52:22.763369 | orchestrator |
2026-03-19 01:52:22.763372 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763376 | orchestrator | Thursday 19 March 2026 01:52:19 +0000 (0:00:00.123) 0:00:03.490 ********
2026-03-19 01:52:22.763380 | orchestrator | changed: [testbed-node-3]
2026-03-19 01:52:22.763384 | orchestrator |
2026-03-19 01:52:22.763387 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763391 | orchestrator | Thursday 19 March 2026 01:52:20 +0000 (0:00:00.683) 0:00:04.174 ********
2026-03-19 01:52:22.763395 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:52:22.763398 | orchestrator |
2026-03-19 01:52:22.763402 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763406 | orchestrator |
2026-03-19 01:52:22.763409 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763413 | orchestrator | Thursday 19 March 2026 01:52:20 +0000 (0:00:00.113) 0:00:04.288 ********
2026-03-19 01:52:22.763417 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:52:22.763421 | orchestrator |
2026-03-19 01:52:22.763424 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763428 | orchestrator | Thursday 19 March 2026 01:52:20 +0000 (0:00:00.095) 0:00:04.383 ********
2026-03-19 01:52:22.763435 | orchestrator | changed: [testbed-node-4]
2026-03-19 01:52:22.763439 | orchestrator |
2026-03-19 01:52:22.763443 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763447 | orchestrator | Thursday 19 March 2026 01:52:21 +0000 (0:00:00.657) 0:00:05.041 ********
2026-03-19 01:52:22.763450 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:52:22.763454 | orchestrator |
2026-03-19 01:52:22.763458 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-19 01:52:22.763462 | orchestrator |
2026-03-19 01:52:22.763466 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-19 01:52:22.763469 | orchestrator | Thursday 19 March 2026 01:52:21 +0000 (0:00:00.111) 0:00:05.153 ********
2026-03-19 01:52:22.763473 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:52:22.763477 | orchestrator |
2026-03-19 01:52:22.763480 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-19 01:52:22.763484 | orchestrator | Thursday 19 March 2026 01:52:21 +0000 (0:00:00.102) 0:00:05.255 ********
2026-03-19 01:52:22.763488 | orchestrator | changed: [testbed-node-5]
2026-03-19 01:52:22.763492 | orchestrator |
2026-03-19 01:52:22.763495 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-19 01:52:22.763499 | orchestrator | Thursday 19 March 2026 01:52:22 +0000 (0:00:00.705) 0:00:05.960 ********
2026-03-19 01:52:22.763512 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:52:22.763516 | orchestrator |
2026-03-19 01:52:22.763520 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 01:52:22.763525 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 01:52:22.763529 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:52:22.763533 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:52:22.763537 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:52:22.763540 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:52:22.763544 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:52:22.763548 | orchestrator | 2026-03-19 01:52:22.763552 | orchestrator | 2026-03-19 01:52:22.763555 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:52:22.763559 | orchestrator | Thursday 19 March 2026 01:52:22 +0000 (0:00:00.039) 0:00:06.000 ******** 2026-03-19 01:52:22.763566 | orchestrator | =============================================================================== 2026-03-19 01:52:22.763569 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.42s 2026-03-19 01:52:22.763573 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2026-03-19 01:52:22.763577 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-03-19 01:52:23.073036 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-19 01:52:35.111798 | orchestrator | 2026-03-19 01:52:35 | INFO  | Task 31d0bcef-b3cd-428b-bb54-7acefaac3dfc (wait-for-connection) was prepared for execution. 2026-03-19 01:52:35.111918 | orchestrator | 2026-03-19 01:52:35 | INFO  | It takes a moment until task 31d0bcef-b3cd-428b-bb54-7acefaac3dfc (wait-for-connection) has been started and output is visible here. 
2026-03-19 01:52:51.115483 | orchestrator | 2026-03-19 01:52:51.115606 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-19 01:52:51.115649 | orchestrator | 2026-03-19 01:52:51.115661 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-19 01:52:51.115672 | orchestrator | Thursday 19 March 2026 01:52:39 +0000 (0:00:00.237) 0:00:00.237 ******** 2026-03-19 01:52:51.115682 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:52:51.115693 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:52:51.115703 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:52:51.115712 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:52:51.115721 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:52:51.115731 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:52:51.115740 | orchestrator | 2026-03-19 01:52:51.115750 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:52:51.115760 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115771 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115781 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115790 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115800 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115809 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:52:51.115819 | orchestrator | 2026-03-19 01:52:51.115829 | orchestrator | 2026-03-19 01:52:51.115839 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-19 01:52:51.115848 | orchestrator | Thursday 19 March 2026 01:52:50 +0000 (0:00:11.459) 0:00:11.696 ******** 2026-03-19 01:52:51.115858 | orchestrator | =============================================================================== 2026-03-19 01:52:51.115870 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.46s 2026-03-19 01:52:51.382123 | orchestrator | + osism apply hddtemp 2026-03-19 01:53:03.420300 | orchestrator | 2026-03-19 01:53:03 | INFO  | Task bf865ea8-5744-406c-ad62-c38c75ffc86d (hddtemp) was prepared for execution. 2026-03-19 01:53:03.420395 | orchestrator | 2026-03-19 01:53:03 | INFO  | It takes a moment until task bf865ea8-5744-406c-ad62-c38c75ffc86d (hddtemp) has been started and output is visible here. 2026-03-19 01:53:31.626378 | orchestrator | 2026-03-19 01:53:31.626502 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-19 01:53:31.626519 | orchestrator | 2026-03-19 01:53:31.626531 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-19 01:53:31.626542 | orchestrator | Thursday 19 March 2026 01:53:07 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-03-19 01:53:31.626554 | orchestrator | ok: [testbed-manager] 2026-03-19 01:53:31.626566 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:53:31.626577 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:53:31.626587 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:53:31.626598 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:53:31.626608 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:53:31.626619 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:53:31.626629 | orchestrator | 2026-03-19 01:53:31.626640 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-19 01:53:31.626651 | orchestrator | Thursday 19 March 2026 
01:53:07 +0000 (0:00:00.581) 0:00:00.806 ******** 2026-03-19 01:53:31.626663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:53:31.626676 | orchestrator | 2026-03-19 01:53:31.626713 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-19 01:53:31.626725 | orchestrator | Thursday 19 March 2026 01:53:09 +0000 (0:00:01.067) 0:00:01.873 ******** 2026-03-19 01:53:31.626736 | orchestrator | ok: [testbed-manager] 2026-03-19 01:53:31.626747 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:53:31.626757 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:53:31.626767 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:53:31.626778 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:53:31.626789 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:53:31.626799 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:53:31.626810 | orchestrator | 2026-03-19 01:53:31.626837 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-19 01:53:31.626848 | orchestrator | Thursday 19 March 2026 01:53:11 +0000 (0:00:02.172) 0:00:04.046 ******** 2026-03-19 01:53:31.626859 | orchestrator | changed: [testbed-manager] 2026-03-19 01:53:31.626870 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:53:31.626881 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:53:31.626892 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:53:31.626902 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:53:31.626915 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:53:31.626927 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:53:31.626939 | orchestrator | 2026-03-19 01:53:31.626951 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-19 01:53:31.626963 | orchestrator | Thursday 19 March 2026 01:53:12 +0000 (0:00:01.131) 0:00:05.177 ******** 2026-03-19 01:53:31.626975 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:53:31.626988 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:53:31.627000 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:53:31.627012 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:53:31.627024 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:53:31.627036 | orchestrator | ok: [testbed-manager] 2026-03-19 01:53:31.627049 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:53:31.627083 | orchestrator | 2026-03-19 01:53:31.627096 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-19 01:53:31.627109 | orchestrator | Thursday 19 March 2026 01:53:13 +0000 (0:00:01.112) 0:00:06.290 ******** 2026-03-19 01:53:31.627121 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:53:31.627133 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:53:31.627145 | orchestrator | changed: [testbed-manager] 2026-03-19 01:53:31.627157 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:53:31.627169 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:53:31.627181 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:53:31.627193 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:53:31.627205 | orchestrator | 2026-03-19 01:53:31.627216 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-19 01:53:31.627227 | orchestrator | Thursday 19 March 2026 01:53:14 +0000 (0:00:00.755) 0:00:07.046 ******** 2026-03-19 01:53:31.627237 | orchestrator | changed: [testbed-manager] 2026-03-19 01:53:31.627248 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:53:31.627258 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:53:31.627269 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:53:31.627279 | orchestrator | changed: 
[testbed-node-5] 2026-03-19 01:53:31.627290 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:53:31.627300 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:53:31.627310 | orchestrator | 2026-03-19 01:53:31.627322 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-19 01:53:31.627332 | orchestrator | Thursday 19 March 2026 01:53:28 +0000 (0:00:14.074) 0:00:21.121 ******** 2026-03-19 01:53:31.627343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 01:53:31.627354 | orchestrator | 2026-03-19 01:53:31.627365 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-19 01:53:31.627386 | orchestrator | Thursday 19 March 2026 01:53:29 +0000 (0:00:01.167) 0:00:22.288 ******** 2026-03-19 01:53:31.627397 | orchestrator | changed: [testbed-manager] 2026-03-19 01:53:31.627407 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:53:31.627418 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:53:31.627429 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:53:31.627439 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:53:31.627450 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:53:31.627460 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:53:31.627471 | orchestrator | 2026-03-19 01:53:31.627481 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:53:31.627492 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 01:53:31.627523 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627535 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627546 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627557 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627568 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627578 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:53:31.627589 | orchestrator | 2026-03-19 01:53:31.627600 | orchestrator | 2026-03-19 01:53:31.627610 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:53:31.627621 | orchestrator | Thursday 19 March 2026 01:53:31 +0000 (0:00:01.835) 0:00:24.124 ******** 2026-03-19 01:53:31.627637 | orchestrator | =============================================================================== 2026-03-19 01:53:31.627655 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.07s 2026-03-19 01:53:31.627673 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.17s 2026-03-19 01:53:31.627690 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2026-03-19 01:53:31.627701 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-03-19 01:53:31.627712 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2026-03-19 01:53:31.627723 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.11s 2026-03-19 01:53:31.627734 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.07s 2026-03-19 01:53:31.627744 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.76s 2026-03-19 01:53:31.627755 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s 2026-03-19 01:53:31.899370 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-19 01:53:31.946462 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 01:53:31.946590 | orchestrator | + sudo systemctl restart manager.service 2026-03-19 01:53:45.564173 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 01:53:45.564286 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-19 01:53:45.564304 | orchestrator | + local max_attempts=60 2026-03-19 01:53:45.564318 | orchestrator | + local name=ceph-ansible 2026-03-19 01:53:45.564330 | orchestrator | + local attempt_num=1 2026-03-19 01:53:45.564341 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:53:45.607848 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:53:45.607969 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:53:45.607986 | orchestrator | + sleep 5 2026-03-19 01:53:50.612653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:53:50.638626 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:53:50.638733 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:53:50.638756 | orchestrator | + sleep 5 2026-03-19 01:53:55.641919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:53:55.682775 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:53:55.682902 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:53:55.682926 | orchestrator | + sleep 5 2026-03-19 01:54:00.687372 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:00.723510 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:00.723612 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-19 01:54:00.723625 | orchestrator | + sleep 5 2026-03-19 01:54:05.727608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:05.771897 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:05.771977 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:05.771984 | orchestrator | + sleep 5 2026-03-19 01:54:10.776778 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:10.816647 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:10.816758 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:10.816773 | orchestrator | + sleep 5 2026-03-19 01:54:15.820607 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:15.856701 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:15.856791 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:15.856801 | orchestrator | + sleep 5 2026-03-19 01:54:20.860608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:20.890934 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:20.891036 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:20.891050 | orchestrator | + sleep 5 2026-03-19 01:54:25.894819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:25.913536 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:25.913610 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:25.913616 | orchestrator | + sleep 5 2026-03-19 01:54:30.917003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:30.950879 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:30.951032 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-19 01:54:30.951059 | orchestrator | + sleep 5 2026-03-19 01:54:35.955481 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:35.988764 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:35.988863 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:35.988877 | orchestrator | + sleep 5 2026-03-19 01:54:40.991588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:41.020429 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:41.020544 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:41.020570 | orchestrator | + sleep 5 2026-03-19 01:54:46.025311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:46.065347 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:46.065444 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-19 01:54:46.065457 | orchestrator | + sleep 5 2026-03-19 01:54:51.070455 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-19 01:54:51.110373 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:51.110494 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-19 01:54:51.110511 | orchestrator | + local max_attempts=60 2026-03-19 01:54:51.110523 | orchestrator | + local name=kolla-ansible 2026-03-19 01:54:51.110533 | orchestrator | + local attempt_num=1 2026-03-19 01:54:51.111029 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-19 01:54:51.148451 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:51.148558 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-19 01:54:51.148575 | orchestrator | + local max_attempts=60 2026-03-19 01:54:51.148588 | orchestrator | + local name=osism-ansible 2026-03-19 01:54:51.148639 | 
orchestrator | + local attempt_num=1 2026-03-19 01:54:51.148902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-19 01:54:51.184883 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 01:54:51.184984 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-19 01:54:51.184998 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-19 01:54:51.345741 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-19 01:54:51.507567 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-19 01:54:51.638894 | orchestrator | ARA in osism-ansible already disabled. 2026-03-19 01:54:51.788987 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-19 01:54:51.789113 | orchestrator | + osism apply gather-facts 2026-03-19 01:55:03.832589 | orchestrator | 2026-03-19 01:55:03 | INFO  | Task d08894ed-249f-445d-896c-67d95af9f567 (gather-facts) was prepared for execution. 2026-03-19 01:55:03.832717 | orchestrator | 2026-03-19 01:55:03 | INFO  | It takes a moment until task d08894ed-249f-445d-896c-67d95af9f567 (gather-facts) has been started and output is visible here. 
2026-03-19 01:55:17.379810 | orchestrator | 2026-03-19 01:55:17.379897 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 01:55:17.379903 | orchestrator | 2026-03-19 01:55:17.379908 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-19 01:55:17.379912 | orchestrator | Thursday 19 March 2026 01:55:07 +0000 (0:00:00.190) 0:00:00.190 ******** 2026-03-19 01:55:17.379917 | orchestrator | ok: [testbed-manager] 2026-03-19 01:55:17.379922 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:55:17.379927 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:55:17.379930 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:55:17.379934 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:55:17.379938 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:55:17.379942 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:55:17.379945 | orchestrator | 2026-03-19 01:55:17.379949 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 01:55:17.379953 | orchestrator | 2026-03-19 01:55:17.379957 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 01:55:17.379961 | orchestrator | Thursday 19 March 2026 01:55:16 +0000 (0:00:08.792) 0:00:08.982 ******** 2026-03-19 01:55:17.379964 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:55:17.379969 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:55:17.379973 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:55:17.379976 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:55:17.379980 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:55:17.379984 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:55:17.379987 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:55:17.379991 | orchestrator | 2026-03-19 01:55:17.379995 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-19 01:55:17.379999 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380004 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380008 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380012 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380016 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380019 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380023 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 01:55:17.380052 | orchestrator | 2026-03-19 01:55:17.380056 | orchestrator | 2026-03-19 01:55:17.380060 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:55:17.380064 | orchestrator | Thursday 19 March 2026 01:55:17 +0000 (0:00:00.523) 0:00:09.506 ******** 2026-03-19 01:55:17.380067 | orchestrator | =============================================================================== 2026-03-19 01:55:17.380071 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.79s 2026-03-19 01:55:17.380075 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-19 01:55:17.670597 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-19 01:55:17.688817 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-19 
01:55:17.709107 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-19 01:55:17.723263 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-19 01:55:17.735738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-19 01:55:17.749858 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-19 01:55:17.766151 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-19 01:55:17.782957 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-19 01:55:17.796047 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-19 01:55:17.806782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-19 01:55:17.824550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-19 01:55:17.838101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-19 01:55:17.857629 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-19 01:55:17.869504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-19 01:55:17.886909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-19 01:55:17.901950 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-19 01:55:17.920101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-19 01:55:17.938341 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-19 01:55:17.952179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-19 01:55:17.970207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-19 01:55:17.989314 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-19 01:55:18.004356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-19 01:55:18.022821 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-19 01:55:18.037800 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-19 01:55:18.280444 | orchestrator | ok: Runtime: 0:24:52.165588 2026-03-19 01:55:18.389422 | 2026-03-19 01:55:18.389570 | TASK [Deploy services] 2026-03-19 01:55:19.102367 | orchestrator | 2026-03-19 01:55:19.102506 | orchestrator | # DEPLOY SERVICES 2026-03-19 01:55:19.102518 | orchestrator | 2026-03-19 01:55:19.102524 | orchestrator | + set -e 2026-03-19 01:55:19.102529 | orchestrator | + echo 2026-03-19 01:55:19.102534 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-19 01:55:19.102540 | orchestrator | + echo 2026-03-19 01:55:19.102561 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 01:55:19.102570 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 01:55:19.102577 | orchestrator | ++ INTERACTIVE=false 2026-03-19 
01:55:19.102582 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 01:55:19.102590 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 01:55:19.102594 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 01:55:19.102600 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 01:55:19.102604 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 01:55:19.102611 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 01:55:19.102615 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 01:55:19.102621 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 01:55:19.102627 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 01:55:19.102636 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 01:55:19.102640 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 01:55:19.102644 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 01:55:19.102648 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 01:55:19.102652 | orchestrator | ++ export ARA=false 2026-03-19 01:55:19.102665 | orchestrator | ++ ARA=false 2026-03-19 01:55:19.102669 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 01:55:19.102673 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 01:55:19.102677 | orchestrator | ++ export TEMPEST=false 2026-03-19 01:55:19.102681 | orchestrator | ++ TEMPEST=false 2026-03-19 01:55:19.102684 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 01:55:19.102688 | orchestrator | ++ IS_ZUUL=true 2026-03-19 01:55:19.102692 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:55:19.102696 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:55:19.102700 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 01:55:19.102703 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 01:55:19.102707 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 01:55:19.102711 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 01:55:19.102715 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 
01:55:19.102718 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 01:55:19.102722 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 01:55:19.102729 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 01:55:19.102733 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-19 01:55:19.111514 | orchestrator | + set -e 2026-03-19 01:55:19.112646 | orchestrator | 2026-03-19 01:55:19.112719 | orchestrator | # PULL IMAGES 2026-03-19 01:55:19.112752 | orchestrator | 2026-03-19 01:55:19.112772 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 01:55:19.112793 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 01:55:19.112809 | orchestrator | ++ INTERACTIVE=false 2026-03-19 01:55:19.112826 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 01:55:19.112842 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 01:55:19.112857 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 01:55:19.112873 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 01:55:19.112889 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 01:55:19.112905 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 01:55:19.112935 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 01:55:19.112953 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 01:55:19.112968 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 01:55:19.112984 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 01:55:19.113001 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 01:55:19.113019 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 01:55:19.113034 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 01:55:19.113051 | orchestrator | ++ export ARA=false 2026-03-19 01:55:19.113069 | orchestrator | ++ ARA=false 2026-03-19 01:55:19.113090 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 01:55:19.113107 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 01:55:19.113123 | orchestrator | ++ export TEMPEST=false 
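Both the deploy and pull-images scripts above source `/opt/configuration/scripts/include.sh` and `/opt/manager-vars.sh` to pick up shared settings, which is why the same `++ export ...` trace lines repeat for each script (the `++` depth marks commands traced from inside a sourced file under `set -x`). A minimal sketch of that shared-env-file pattern, with demo paths and values standing in for the testbed's real files:

```shell
# Demo of the shared env-file pattern: write a vars file once, then have
# every script source it. Paths under /tmp are illustrative only.
set -e
cat > /tmp/demo-manager-vars.sh <<'EOF'
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export INTERACTIVE=false
EOF
# Any script that needs the settings just sources the file:
. /tmp/demo-manager-vars.sh
echo "manager=$MANAGER_VERSION openstack=$OPENSTACK_VERSION interactive=$INTERACTIVE"
```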
2026-03-19 01:55:19.113140 | orchestrator | ++ TEMPEST=false 2026-03-19 01:55:19.113157 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 01:55:19.113174 | orchestrator | ++ IS_ZUUL=true 2026-03-19 01:55:19.113185 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:55:19.113195 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:55:19.113204 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 01:55:19.113214 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 01:55:19.113224 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 01:55:19.113233 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 01:55:19.113337 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 01:55:19.113359 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 01:55:19.113377 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 01:55:19.113394 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 01:55:19.113410 | orchestrator | + echo 2026-03-19 01:55:19.113427 | orchestrator | + echo '# PULL IMAGES' 2026-03-19 01:55:19.113445 | orchestrator | + echo 2026-03-19 01:55:19.113475 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-19 01:55:19.163868 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 01:55:19.163995 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-19 01:55:20.883584 | orchestrator | 2026-03-19 01:55:20 | INFO  | Trying to run play pull-images in environment custom 2026-03-19 01:55:31.003871 | orchestrator | 2026-03-19 01:55:31 | INFO  | Task b318d7fe-e7f1-49ab-a18d-3d3e4ffc701e (pull-images) was prepared for execution. 2026-03-19 01:55:31.004096 | orchestrator | 2026-03-19 01:55:31 | INFO  | Task b318d7fe-e7f1-49ab-a18d-3d3e4ffc701e is running in background. No more output. Check ARA for logs. 
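The `semver 9.5.0 7.0.0` call followed by `[[ 1 -ge 0 ]]` above is a version gate: the pull step only runs when the manager version is not older than a required minimum. A hedged sketch of the same gate using GNU `sort -V`; `vercmp` is a local stand-in, not the testbed's actual `semver` helper (whose exact interface the log does not show):

```shell
# vercmp A B: prints 1 if A > B, 0 if equal, -1 if A < B.
# Stand-in for the `semver` helper seen in the log; relies on GNU `sort -V`.
vercmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1  # $1 sorts first, so it is the older version
  else
    echo 1
  fi
}

# Mirror of the gate in the log: proceed when version >= minimum.
if [ "$(vercmp 9.5.0 7.0.0)" -ge 0 ]; then
  echo "gate passed: 9.5.0 >= 7.0.0"
fi
```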
2026-03-19 01:55:31.270170 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-03-19 01:55:43.374072 | orchestrator | 2026-03-19 01:55:43 | INFO  | Task d2f81597-2a99-44d3-993f-e154e1800545 (cgit) was prepared for execution. 2026-03-19 01:55:43.374209 | orchestrator | 2026-03-19 01:55:43 | INFO  | Task d2f81597-2a99-44d3-993f-e154e1800545 is running in background. No more output. Check ARA for logs. 2026-03-19 01:55:56.516714 | orchestrator | 2026-03-19 01:55:56 | INFO  | Task 8677d3af-340f-403e-9c7f-3136e0e99d8d (dotfiles) was prepared for execution. 2026-03-19 01:55:56.516845 | orchestrator | 2026-03-19 01:55:56 | INFO  | Task 8677d3af-340f-403e-9c7f-3136e0e99d8d is running in background. No more output. Check ARA for logs. 2026-03-19 01:56:08.863430 | orchestrator | 2026-03-19 01:56:08 | INFO  | Task 7086b581-7f5a-4207-b27d-1821b03779c8 (homer) was prepared for execution. 2026-03-19 01:56:08.863544 | orchestrator | 2026-03-19 01:56:08 | INFO  | Task 7086b581-7f5a-4207-b27d-1821b03779c8 is running in background. No more output. Check ARA for logs. 2026-03-19 01:56:21.222785 | orchestrator | 2026-03-19 01:56:21 | INFO  | Task 64fe5ee1-0702-4030-ba86-67bb4d37d3ef (phpmyadmin) was prepared for execution. 2026-03-19 01:56:21.222915 | orchestrator | 2026-03-19 01:56:21 | INFO  | Task 64fe5ee1-0702-4030-ba86-67bb4d37d3ef is running in background. No more output. Check ARA for logs. 2026-03-19 01:56:33.404011 | orchestrator | 2026-03-19 01:56:33 | INFO  | Task b2e1a4b8-7b68-4745-94f7-922c0e6ec132 (sosreport) was prepared for execution. 2026-03-19 01:56:33.404117 | orchestrator | 2026-03-19 01:56:33 | INFO  | Task b2e1a4b8-7b68-4745-94f7-922c0e6ec132 is running in background. No more output. Check ARA for logs. 
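Each helper deploy above (cgit, dotfiles, homer, phpmyadmin, sosreport) is handed to the task queue and immediately detached ("running in background. No more output. Check ARA for logs."), so the console only records the hand-off. A rough sketch of that fire-and-forget shape, with a plain shell background job standing in for the osism task runner (`deploy_service` is a demo function, not an osism command):

```shell
# Fire-and-forget sketch: queue each service deploy as a background job
# and move on without streaming its output.
: > /tmp/demo-deploys.log
deploy_service() {
  # The real deploys run asynchronously; here we only record the hand-off.
  echo "task for $1 prepared and running in background" >> /tmp/demo-deploys.log
}
for svc in cgit dotfiles homer phpmyadmin sosreport; do
  deploy_service "$svc" &
done
wait  # demo only: the real job returns without waiting and checks ARA later
sort /tmp/demo-deploys.log
```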
2026-03-19 01:56:33.676669 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-03-19 01:56:33.683668 | orchestrator | + set -e 2026-03-19 01:56:33.683772 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 01:56:33.683789 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 01:56:33.683803 | orchestrator | ++ INTERACTIVE=false 2026-03-19 01:56:33.683816 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 01:56:33.683828 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 01:56:33.683839 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 01:56:33.683850 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 01:56:33.683862 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 01:56:33.683873 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 01:56:33.683884 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 01:56:33.683895 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 01:56:33.683906 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 01:56:33.683917 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 01:56:33.683929 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 01:56:33.683952 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 01:56:33.683963 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 01:56:33.683974 | orchestrator | ++ export ARA=false 2026-03-19 01:56:33.683986 | orchestrator | ++ ARA=false 2026-03-19 01:56:33.683997 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 01:56:33.684042 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 01:56:33.684054 | orchestrator | ++ export TEMPEST=false 2026-03-19 01:56:33.684064 | orchestrator | ++ TEMPEST=false 2026-03-19 01:56:33.684075 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 01:56:33.684086 | orchestrator | ++ IS_ZUUL=true 2026-03-19 01:56:33.684114 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:56:33.684131 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 01:56:33.684143 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 01:56:33.684284 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 01:56:33.684395 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 01:56:33.684451 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 01:56:33.684472 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 01:56:33.684492 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 01:56:33.684509 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 01:56:33.684520 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 01:56:33.684532 | orchestrator | ++ semver 9.5.0 8.0.3 2026-03-19 01:56:33.733034 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 01:56:33.733146 | orchestrator | + osism apply frr 2026-03-19 01:56:46.143987 | orchestrator | 2026-03-19 01:56:46 | INFO  | Task 525457eb-8ec8-484c-b8c9-ef5a47d78ec2 (frr) was prepared for execution. 2026-03-19 01:56:46.144106 | orchestrator | 2026-03-19 01:56:46 | INFO  | It takes a moment until task 525457eb-8ec8-484c-b8c9-ef5a47d78ec2 (frr) has been started and output is visible here. 
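`include.sh` exports `OSISM_APPLY_RETRY=1`, and the earlier pull step ran `osism apply --no-wait -r 2`, so apply steps can evidently be retried on failure. A generic hedged sketch of such a retry loop; the wrapper and the flaky demo step are illustrative only and say nothing about osism's actual retry implementation:

```shell
# retry N cmd...: run cmd up to N times, stopping at the first success.
# Generic sketch of an apply-retry wrapper; not osism's code.
retry() {
  local attempts=$1 i
  shift
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$attempts failed, retrying" >&2
  done
  return 1
}

# Demo: a step that fails on the first attempt and succeeds on the second.
: > /tmp/demo-retry-marker
flaky_step() {
  if [ ! -s /tmp/demo-retry-marker ]; then
    echo fail > /tmp/demo-retry-marker
    return 1
  fi
  echo "step succeeded"
}
retry 2 flaky_step
```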
2026-03-19 01:57:14.364561 | orchestrator | 2026-03-19 01:57:14.364652 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-19 01:57:14.364660 | orchestrator | 2026-03-19 01:57:14.364664 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-19 01:57:14.364674 | orchestrator | Thursday 19 March 2026 01:56:50 +0000 (0:00:00.313) 0:00:00.313 ******** 2026-03-19 01:57:14.364679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 01:57:14.364684 | orchestrator | 2026-03-19 01:57:14.364688 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-19 01:57:14.364692 | orchestrator | Thursday 19 March 2026 01:56:51 +0000 (0:00:00.258) 0:00:00.572 ******** 2026-03-19 01:57:14.364696 | orchestrator | changed: [testbed-manager] 2026-03-19 01:57:14.364701 | orchestrator | 2026-03-19 01:57:14.364705 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-19 01:57:14.364710 | orchestrator | Thursday 19 March 2026 01:56:52 +0000 (0:00:01.360) 0:00:01.933 ******** 2026-03-19 01:57:14.364714 | orchestrator | changed: [testbed-manager] 2026-03-19 01:57:14.364718 | orchestrator | 2026-03-19 01:57:14.364722 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-19 01:57:14.364726 | orchestrator | Thursday 19 March 2026 01:57:03 +0000 (0:00:11.276) 0:00:13.209 ******** 2026-03-19 01:57:14.364730 | orchestrator | ok: [testbed-manager] 2026-03-19 01:57:14.364734 | orchestrator | 2026-03-19 01:57:14.364738 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-19 01:57:14.364742 | orchestrator | Thursday 19 March 2026 01:57:04 +0000 (0:00:01.038) 0:00:14.248 ******** 2026-03-19 
01:57:14.364746 | orchestrator | changed: [testbed-manager] 2026-03-19 01:57:14.364750 | orchestrator | 2026-03-19 01:57:14.364753 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-19 01:57:14.364758 | orchestrator | Thursday 19 March 2026 01:57:05 +0000 (0:00:00.918) 0:00:15.166 ******** 2026-03-19 01:57:14.364765 | orchestrator | ok: [testbed-manager] 2026-03-19 01:57:14.364771 | orchestrator | 2026-03-19 01:57:14.364777 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-19 01:57:14.364784 | orchestrator | Thursday 19 March 2026 01:57:07 +0000 (0:00:01.313) 0:00:16.480 ******** 2026-03-19 01:57:14.364790 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:57:14.364796 | orchestrator | 2026-03-19 01:57:14.364802 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-19 01:57:14.364809 | orchestrator | Thursday 19 March 2026 01:57:07 +0000 (0:00:00.137) 0:00:16.618 ******** 2026-03-19 01:57:14.364836 | orchestrator | skipping: [testbed-manager] 2026-03-19 01:57:14.364841 | orchestrator | 2026-03-19 01:57:14.364845 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-19 01:57:14.364849 | orchestrator | Thursday 19 March 2026 01:57:07 +0000 (0:00:00.185) 0:00:16.803 ******** 2026-03-19 01:57:14.364853 | orchestrator | changed: [testbed-manager] 2026-03-19 01:57:14.364857 | orchestrator | 2026-03-19 01:57:14.364861 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-19 01:57:14.364864 | orchestrator | Thursday 19 March 2026 01:57:08 +0000 (0:00:01.079) 0:00:17.883 ******** 2026-03-19 01:57:14.364868 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-19 01:57:14.364872 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-19 01:57:14.364877 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-19 01:57:14.364881 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-19 01:57:14.364885 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-19 01:57:14.364889 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-19 01:57:14.364892 | orchestrator | 2026-03-19 01:57:14.364896 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-19 01:57:14.364900 | orchestrator | Thursday 19 March 2026 01:57:11 +0000 (0:00:02.675) 0:00:20.558 ******** 2026-03-19 01:57:14.364904 | orchestrator | ok: [testbed-manager] 2026-03-19 01:57:14.364907 | orchestrator | 2026-03-19 01:57:14.364911 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-19 01:57:14.364915 | orchestrator | Thursday 19 March 2026 01:57:12 +0000 (0:00:01.806) 0:00:22.364 ******** 2026-03-19 01:57:14.364919 | orchestrator | changed: [testbed-manager] 2026-03-19 01:57:14.364922 | orchestrator | 2026-03-19 01:57:14.364926 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 01:57:14.364930 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 01:57:14.364934 | orchestrator | 2026-03-19 01:57:14.364938 | orchestrator | 2026-03-19 01:57:14.364946 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 01:57:14.364949 | orchestrator | Thursday 19 March 2026 01:57:14 +0000 (0:00:01.241) 0:00:23.606 ******** 2026-03-19 01:57:14.364953 | 
orchestrator | =============================================================================== 2026-03-19 01:57:14.364957 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.28s 2026-03-19 01:57:14.364961 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.68s 2026-03-19 01:57:14.364964 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.81s 2026-03-19 01:57:14.364968 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.36s 2026-03-19 01:57:14.364972 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.31s 2026-03-19 01:57:14.364987 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.24s 2026-03-19 01:57:14.364991 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.08s 2026-03-19 01:57:14.364994 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.04s 2026-03-19 01:57:14.364998 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s 2026-03-19 01:57:14.365002 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-03-19 01:57:14.365005 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-03-19 01:57:14.365009 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-19 01:57:14.597712 | orchestrator | + osism apply kubernetes 2026-03-19 01:57:16.162686 | orchestrator | 2026-03-19 01:57:16 | INFO  | Task 5b81f994-4830-416a-bad7-dddeb3d6c09d (kubernetes) was prepared for execution. 
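The frr role above applied six `net.ipv4.*` kernel parameters (forwarding on, ICMP redirects off, multipath hash policy, ignore-routes-with-linkdown, loose rp_filter). Setting them live requires root (`sysctl -w`), so this sketch only renders the equivalent persistent drop-in with the exact values from the log; the `/tmp` path is a demo stand-in for `/etc/sysctl.d/`:

```shell
# Render a sysctl drop-in with the values the frr role set in the log.
# On a real host this would live in /etc/sysctl.d/ and be loaded with
# `sysctl --system` (root required); here we only write a demo file.
set -e
cat > /tmp/demo-90-frr.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
wc -l < /tmp/demo-90-frr.conf
```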
2026-03-19 01:57:16.162809 | orchestrator | 2026-03-19 01:57:16 | INFO  | It takes a moment until task 5b81f994-4830-416a-bad7-dddeb3d6c09d (kubernetes) has been started and output is visible here. 2026-03-19 01:57:38.527145 | orchestrator | 2026-03-19 01:57:38.527268 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-19 01:57:38.527279 | orchestrator | 2026-03-19 01:57:38.527286 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-19 01:57:38.527294 | orchestrator | Thursday 19 March 2026 01:57:20 +0000 (0:00:00.139) 0:00:00.139 ******** 2026-03-19 01:57:38.527301 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:57:38.527308 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:57:38.527315 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:57:38.527322 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:57:38.527328 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:57:38.527334 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:57:38.527340 | orchestrator | 2026-03-19 01:57:38.527347 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-19 01:57:38.527353 | orchestrator | Thursday 19 March 2026 01:57:20 +0000 (0:00:00.578) 0:00:00.718 ******** 2026-03-19 01:57:38.527359 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.527366 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.527377 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.527387 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.527400 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.527412 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.527424 | orchestrator | 2026-03-19 01:57:38.527433 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-19 01:57:38.527444 | orchestrator | Thursday 19 March 2026 
01:57:21 +0000 (0:00:00.485) 0:00:01.203 ******** 2026-03-19 01:57:38.527454 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.527463 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.527472 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.527481 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.527491 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.527500 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.527565 | orchestrator | 2026-03-19 01:57:38.527576 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-19 01:57:38.527586 | orchestrator | Thursday 19 March 2026 01:57:21 +0000 (0:00:00.573) 0:00:01.776 ******** 2026-03-19 01:57:38.527597 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:57:38.527607 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:57:38.527617 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:57:38.527630 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:57:38.527640 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:57:38.527647 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:57:38.527653 | orchestrator | 2026-03-19 01:57:38.527660 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-19 01:57:38.527668 | orchestrator | Thursday 19 March 2026 01:57:23 +0000 (0:00:01.610) 0:00:03.387 ******** 2026-03-19 01:57:38.527675 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:57:38.527684 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:57:38.527693 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:57:38.527702 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:57:38.527714 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:57:38.527733 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:57:38.527746 | orchestrator | 2026-03-19 01:57:38.527758 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-03-19 01:57:38.527770 | orchestrator | Thursday 19 March 2026 01:57:24 +0000 (0:00:01.157) 0:00:04.544 ******** 2026-03-19 01:57:38.527781 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:57:38.527822 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:57:38.527834 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:57:38.527845 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:57:38.527856 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:57:38.527868 | orchestrator | changed: [testbed-node-4] 2026-03-19 01:57:38.527879 | orchestrator | 2026-03-19 01:57:38.527901 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-19 01:57:38.527913 | orchestrator | Thursday 19 March 2026 01:57:26 +0000 (0:00:01.628) 0:00:06.172 ******** 2026-03-19 01:57:38.527924 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.527935 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.527947 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.527958 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.527970 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.527982 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.527994 | orchestrator | 2026-03-19 01:57:38.528007 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-19 01:57:38.528020 | orchestrator | Thursday 19 March 2026 01:57:26 +0000 (0:00:00.571) 0:00:06.744 ******** 2026-03-19 01:57:38.528033 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.528046 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.528054 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.528061 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.528068 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.528076 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 01:57:38.528083 | orchestrator | 2026-03-19 01:57:38.528090 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-19 01:57:38.528097 | orchestrator | Thursday 19 March 2026 01:57:27 +0000 (0:00:00.723) 0:00:07.467 ******** 2026-03-19 01:57:38.528104 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528111 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528118 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.528125 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528132 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528140 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.528147 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528154 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528161 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.528168 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528193 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528201 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.528208 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528215 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528222 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.528229 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 01:57:38.528236 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 01:57:38.528244 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.528251 | orchestrator | 2026-03-19 01:57:38.528258 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-19 01:57:38.528265 | orchestrator | Thursday 19 March 2026 01:57:28 +0000 (0:00:00.601) 0:00:08.069 ******** 2026-03-19 01:57:38.528272 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.528279 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.528286 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.528304 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.528311 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.528318 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.528325 | orchestrator | 2026-03-19 01:57:38.528332 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-19 01:57:38.528340 | orchestrator | Thursday 19 March 2026 01:57:29 +0000 (0:00:01.110) 0:00:09.179 ******** 2026-03-19 01:57:38.528348 | orchestrator | ok: [testbed-node-3] 2026-03-19 01:57:38.528355 | orchestrator | ok: [testbed-node-4] 2026-03-19 01:57:38.528362 | orchestrator | ok: [testbed-node-5] 2026-03-19 01:57:38.528369 | orchestrator | ok: [testbed-node-0] 2026-03-19 01:57:38.528376 | orchestrator | ok: [testbed-node-1] 2026-03-19 01:57:38.528383 | orchestrator | ok: [testbed-node-2] 2026-03-19 01:57:38.528390 | orchestrator | 2026-03-19 01:57:38.528397 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-19 01:57:38.528405 | orchestrator | Thursday 19 March 2026 01:57:29 +0000 (0:00:00.774) 0:00:09.954 ******** 2026-03-19 01:57:38.528412 | orchestrator | changed: [testbed-node-0] 2026-03-19 01:57:38.528419 | orchestrator | changed: 
[testbed-node-4] 2026-03-19 01:57:38.528426 | orchestrator | changed: [testbed-node-1] 2026-03-19 01:57:38.528433 | orchestrator | changed: [testbed-node-5] 2026-03-19 01:57:38.528440 | orchestrator | changed: [testbed-node-3] 2026-03-19 01:57:38.528447 | orchestrator | changed: [testbed-node-2] 2026-03-19 01:57:38.528454 | orchestrator | 2026-03-19 01:57:38.528461 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-19 01:57:38.528469 | orchestrator | Thursday 19 March 2026 01:57:35 +0000 (0:00:05.199) 0:00:15.154 ******** 2026-03-19 01:57:38.528476 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.528487 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.528495 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.528502 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.528584 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.528593 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.528601 | orchestrator | 2026-03-19 01:57:38.528608 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-19 01:57:38.528615 | orchestrator | Thursday 19 March 2026 01:57:35 +0000 (0:00:00.823) 0:00:15.978 ******** 2026-03-19 01:57:38.528622 | orchestrator | skipping: [testbed-node-3] 2026-03-19 01:57:38.528629 | orchestrator | skipping: [testbed-node-4] 2026-03-19 01:57:38.528636 | orchestrator | skipping: [testbed-node-5] 2026-03-19 01:57:38.528643 | orchestrator | skipping: [testbed-node-0] 2026-03-19 01:57:38.528650 | orchestrator | skipping: [testbed-node-1] 2026-03-19 01:57:38.528657 | orchestrator | skipping: [testbed-node-2] 2026-03-19 01:57:38.528664 | orchestrator | 2026-03-19 01:57:38.528672 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-19 01:57:38.528680 | orchestrator | Thursday 19 
March 2026 01:57:37 +0000 (0:00:01.145) 0:00:17.123 ********
2026-03-19 01:57:38.528688 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:57:38.528695 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:57:38.528702 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:57:38.528709 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:57:38.528716 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:57:38.528723 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:57:38.528730 | orchestrator |
2026-03-19 01:57:38.528737 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-19 01:57:38.528744 | orchestrator | Thursday 19 March 2026 01:57:37 +0000 (0:00:00.541) 0:00:17.665 ********
2026-03-19 01:57:38.528752 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-19 01:57:38.528765 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-19 01:57:38.528772 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:57:38.528779 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-19 01:57:38.528793 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-19 01:57:38.528800 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:57:38.528807 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-19 01:57:38.528814 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-19 01:57:38.528821 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:57:38.528828 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-19 01:57:38.528835 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-19 01:57:38.528842 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:57:38.528850 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-19 01:57:38.528857 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-19 01:57:38.528864 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:57:38.528871 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-19 01:57:38.528878 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-19 01:57:38.528885 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:57:38.528892 | orchestrator |
2026-03-19 01:57:38.528899 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-19 01:57:38.528913 | orchestrator | Thursday 19 March 2026 01:57:38 +0000 (0:00:00.847) 0:00:18.513 ********
2026-03-19 01:58:52.056484 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:58:52.056611 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:58:52.056697 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:58:52.056717 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.056735 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.056753 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.056772 | orchestrator |
2026-03-19 01:58:52.056791 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-19 01:58:52.056810 | orchestrator | Thursday 19 March 2026 01:57:39 +0000 (0:00:00.512) 0:00:19.025 ********
2026-03-19 01:58:52.056828 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:58:52.056845 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:58:52.056862 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:58:52.056880 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.056897 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.056915 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.056932 | orchestrator |
2026-03-19 01:58:52.056950 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-19 01:58:52.056970 | orchestrator |
2026-03-19 01:58:52.056989 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-19 01:58:52.057009 | orchestrator | Thursday 19 March 2026 01:57:40 +0000 (0:00:01.088) 0:00:20.113 ********
2026-03-19 01:58:52.057029 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.057050 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.057069 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.057082 | orchestrator |
2026-03-19 01:58:52.057095 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-19 01:58:52.057108 | orchestrator | Thursday 19 March 2026 01:57:41 +0000 (0:00:00.952) 0:00:21.066 ********
2026-03-19 01:58:52.057121 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.057134 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.057146 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.057159 | orchestrator |
2026-03-19 01:58:52.057171 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-19 01:58:52.057184 | orchestrator | Thursday 19 March 2026 01:57:42 +0000 (0:00:01.268) 0:00:22.334 ********
2026-03-19 01:58:52.057197 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.057210 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.057220 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.057232 | orchestrator |
2026-03-19 01:58:52.057243 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-19 01:58:52.057254 | orchestrator | Thursday 19 March 2026 01:57:43 +0000 (0:00:00.916) 0:00:23.251 ********
2026-03-19 01:58:52.057291 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.057303 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.057314 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.057324 | orchestrator |
2026-03-19 01:58:52.057335 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-19 01:58:52.057346 | orchestrator | Thursday 19 March 2026 01:57:43 +0000 (0:00:00.648) 0:00:23.899 ********
2026-03-19 01:58:52.057357 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.057367 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.057378 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.057389 | orchestrator |
2026-03-19 01:58:52.057400 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-19 01:58:52.057430 | orchestrator | Thursday 19 March 2026 01:57:44 +0000 (0:00:00.287) 0:00:24.187 ********
2026-03-19 01:58:52.057442 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.057453 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:58:52.057463 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:58:52.057474 | orchestrator |
2026-03-19 01:58:52.057485 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-19 01:58:52.057495 | orchestrator | Thursday 19 March 2026 01:57:45 +0000 (0:00:00.857) 0:00:25.044 ********
2026-03-19 01:58:52.057506 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.057517 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:58:52.057528 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:58:52.057538 | orchestrator |
2026-03-19 01:58:52.057549 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-19 01:58:52.057560 | orchestrator | Thursday 19 March 2026 01:57:46 +0000 (0:00:01.331) 0:00:26.376 ********
2026-03-19 01:58:52.057571 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 01:58:52.057581 | orchestrator |
2026-03-19 01:58:52.057592 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-19 01:58:52.057603 | orchestrator | Thursday 19 March 2026 01:57:46 +0000 (0:00:00.488) 0:00:26.864 ********
2026-03-19 01:58:52.057614 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.057649 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.057661 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.057672 | orchestrator |
2026-03-19 01:58:52.057682 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-19 01:58:52.057693 | orchestrator | Thursday 19 March 2026 01:57:48 +0000 (0:00:01.815) 0:00:28.680 ********
2026-03-19 01:58:52.057704 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.057715 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.057726 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.057736 | orchestrator |
2026-03-19 01:58:52.057747 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-19 01:58:52.057758 | orchestrator | Thursday 19 March 2026 01:57:49 +0000 (0:00:00.488) 0:00:29.168 ********
2026-03-19 01:58:52.057775 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.057792 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.057810 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.057828 | orchestrator |
2026-03-19 01:58:52.057848 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-19 01:58:52.057867 | orchestrator | Thursday 19 March 2026 01:57:49 +0000 (0:00:00.703) 0:00:29.872 ********
2026-03-19 01:58:52.057884 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.057903 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.057920 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.057939 | orchestrator |
2026-03-19 01:58:52.057957 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-19 01:58:52.057996 | orchestrator | Thursday 19 March 2026 01:57:51 +0000 (0:00:01.204) 0:00:31.077 ********
2026-03-19 01:58:52.058008 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.058087 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.058103 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.058122 | orchestrator |
2026-03-19 01:58:52.058143 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-19 01:58:52.058163 | orchestrator | Thursday 19 March 2026 01:57:51 +0000 (0:00:00.464) 0:00:31.541 ********
2026-03-19 01:58:52.058183 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.058202 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.058222 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.058240 | orchestrator |
2026-03-19 01:58:52.058257 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-19 01:58:52.058269 | orchestrator | Thursday 19 March 2026 01:57:51 +0000 (0:00:00.308) 0:00:31.850 ********
2026-03-19 01:58:52.058279 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:58:52.058290 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:58:52.058307 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:58:52.058326 | orchestrator |
2026-03-19 01:58:52.058352 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-19 01:58:52.058372 | orchestrator | Thursday 19 March 2026 01:57:53 +0000 (0:00:01.329) 0:00:33.179 ********
2026-03-19 01:58:52.058389 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.058408 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.058426 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.058443 | orchestrator |
2026-03-19 01:58:52.058454 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-19 01:58:52.058465 | orchestrator | Thursday 19 March 2026 01:57:56 +0000 (0:00:02.979) 0:00:36.159 ********
2026-03-19 01:58:52.058475 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.058486 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.058497 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.058512 | orchestrator |
2026-03-19 01:58:52.058523 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-19 01:58:52.058535 | orchestrator | Thursday 19 March 2026 01:57:56 +0000 (0:00:00.437) 0:00:36.597 ********
2026-03-19 01:58:52.058546 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 01:58:52.058559 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 01:58:52.058569 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 01:58:52.058580 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 01:58:52.058591 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 01:58:52.058602 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 01:58:52.058613 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-19 01:58:52.058689 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-19 01:58:52.058706 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-19 01:58:52.058723 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-19 01:58:52.058740 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-19 01:58:52.058770 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-19 01:58:52.058787 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-19 01:58:52.058803 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-19 01:58:52.058821 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-19 01:58:52.058838 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:58:52.058855 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:58:52.058872 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:58:52.058889 | orchestrator |
2026-03-19 01:58:52.058915 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-19 01:58:52.058933 | orchestrator | Thursday 19 March 2026 01:58:50 +0000 (0:00:54.170) 0:01:30.767 ********
2026-03-19 01:58:52.058952 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:58:52.058970 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:58:52.058987 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:58:52.058998 | orchestrator |
2026-03-19 01:58:52.059009 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-19 01:58:52.059020 | orchestrator | Thursday 19 March 2026 01:58:51 +0000 (0:00:00.316) 0:01:31.084 ********
2026-03-19 01:58:52.059044 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.126848 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.126988 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127005 | orchestrator |
2026-03-19 01:59:34.127019 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-19 01:59:34.127032 | orchestrator | Thursday 19 March 2026 01:58:52 +0000 (0:00:00.960) 0:01:32.044 ********
2026-03-19 01:59:34.127043 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127054 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127066 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127077 | orchestrator |
2026-03-19 01:59:34.127088 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-19 01:59:34.127099 | orchestrator | Thursday 19 March 2026 01:58:53 +0000 (0:00:01.212) 0:01:33.257 ********
2026-03-19 01:59:34.127110 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127121 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127131 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127142 | orchestrator |
2026-03-19 01:59:34.127153 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-19 01:59:34.127164 | orchestrator | Thursday 19 March 2026 01:59:19 +0000 (0:00:26.498) 0:01:59.755 ********
2026-03-19 01:59:34.127175 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.127187 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.127198 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.127208 | orchestrator |
2026-03-19 01:59:34.127219 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-19 01:59:34.127230 | orchestrator | Thursday 19 March 2026 01:59:20 +0000 (0:00:00.672) 0:02:00.428 ********
2026-03-19 01:59:34.127241 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.127252 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.127263 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.127274 | orchestrator |
2026-03-19 01:59:34.127287 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-19 01:59:34.127300 | orchestrator | Thursday 19 March 2026 01:59:21 +0000 (0:00:00.635) 0:02:01.112 ********
2026-03-19 01:59:34.127313 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127325 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127336 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127349 | orchestrator |
2026-03-19 01:59:34.127361 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-19 01:59:34.127402 | orchestrator | Thursday 19 March 2026 01:59:21 +0000 (0:00:00.635) 0:02:01.748 ********
2026-03-19 01:59:34.127415 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.127427 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.127439 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.127450 | orchestrator |
2026-03-19 01:59:34.127461 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-19 01:59:34.127472 | orchestrator | Thursday 19 March 2026 01:59:22 +0000 (0:00:00.824) 0:02:02.573 ********
2026-03-19 01:59:34.127483 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.127493 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.127504 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.127515 | orchestrator |
2026-03-19 01:59:34.127526 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-19 01:59:34.127537 | orchestrator | Thursday 19 March 2026 01:59:22 +0000 (0:00:00.286) 0:02:02.859 ********
2026-03-19 01:59:34.127547 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127558 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127569 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127580 | orchestrator |
2026-03-19 01:59:34.127590 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-19 01:59:34.127601 | orchestrator | Thursday 19 March 2026 01:59:23 +0000 (0:00:00.642) 0:02:03.502 ********
2026-03-19 01:59:34.127612 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127623 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127634 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127645 | orchestrator |
2026-03-19 01:59:34.127656 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-19 01:59:34.127667 | orchestrator | Thursday 19 March 2026 01:59:24 +0000 (0:00:00.635) 0:02:04.137 ********
2026-03-19 01:59:34.127763 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127777 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127788 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127798 | orchestrator |
2026-03-19 01:59:34.127810 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-19 01:59:34.127821 | orchestrator | Thursday 19 March 2026 01:59:25 +0000 (0:00:00.975) 0:02:05.113 ********
2026-03-19 01:59:34.127834 | orchestrator | changed: [testbed-node-0]
2026-03-19 01:59:34.127845 | orchestrator | changed: [testbed-node-1]
2026-03-19 01:59:34.127855 | orchestrator | changed: [testbed-node-2]
2026-03-19 01:59:34.127866 | orchestrator |
2026-03-19 01:59:34.127877 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-19 01:59:34.127888 | orchestrator | Thursday 19 March 2026 01:59:26 +0000 (0:00:01.039) 0:02:06.152 ********
2026-03-19 01:59:34.127899 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:59:34.127910 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:59:34.127920 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:59:34.127931 | orchestrator |
2026-03-19 01:59:34.127942 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-19 01:59:34.127953 | orchestrator | Thursday 19 March 2026 01:59:26 +0000 (0:00:00.285) 0:02:06.438 ********
2026-03-19 01:59:34.127964 | orchestrator | skipping: [testbed-node-0]
2026-03-19 01:59:34.127974 | orchestrator | skipping: [testbed-node-1]
2026-03-19 01:59:34.127985 | orchestrator | skipping: [testbed-node-2]
2026-03-19 01:59:34.127995 | orchestrator |
2026-03-19 01:59:34.128006 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-19 01:59:34.128017 | orchestrator | Thursday 19 March 2026 01:59:26 +0000 (0:00:00.273) 0:02:06.711 ********
2026-03-19 01:59:34.128028 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.128039 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.128049 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.128060 | orchestrator |
2026-03-19 01:59:34.128071 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-19 01:59:34.128082 | orchestrator | Thursday 19 March 2026 01:59:27 +0000 (0:00:00.599) 0:02:07.311 ********
2026-03-19 01:59:34.128102 | orchestrator | ok: [testbed-node-0]
2026-03-19 01:59:34.128113 | orchestrator | ok: [testbed-node-1]
2026-03-19 01:59:34.128145 | orchestrator | ok: [testbed-node-2]
2026-03-19 01:59:34.128156 | orchestrator |
2026-03-19 01:59:34.128168 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-19 01:59:34.128180 | orchestrator | Thursday 19 March 2026 01:59:28 +0000 (0:00:00.941) 0:02:08.252 ********
2026-03-19 01:59:34.128191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 01:59:34.128203 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 01:59:34.128213 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 01:59:34.128224 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 01:59:34.128235 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 01:59:34.128245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 01:59:34.128256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 01:59:34.128267 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 01:59:34.128278 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 01:59:34.128288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-19 01:59:34.128299 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 01:59:34.128310 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 01:59:34.128321 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-19 01:59:34.128332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 01:59:34.128342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 01:59:34.128353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 01:59:34.128364 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 01:59:34.128375 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 01:59:34.128386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 01:59:34.128396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 01:59:34.128407 | orchestrator |
2026-03-19 01:59:34.128418 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-19 01:59:34.128429 | orchestrator |
2026-03-19 01:59:34.128440 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-19 01:59:34.128454 | orchestrator | Thursday 19 March 2026 01:59:31 +0000 (0:00:03.095) 0:02:11.347 ********
2026-03-19 01:59:34.128472 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:59:34.128489 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:59:34.128507 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:59:34.128526 | orchestrator |
2026-03-19 01:59:34.128567 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-19 01:59:34.128587 | orchestrator | Thursday 19 March 2026 01:59:31 +0000 (0:00:00.298) 0:02:11.646 ********
2026-03-19 01:59:34.128599 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:59:34.128610 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:59:34.128621 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:59:34.128639 | orchestrator |
2026-03-19 01:59:34.128650 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-19 01:59:34.128661 | orchestrator | Thursday 19 March 2026 01:59:32 +0000 (0:00:00.816) 0:02:12.462 ********
2026-03-19 01:59:34.128672 | orchestrator | ok: [testbed-node-3]
2026-03-19 01:59:34.128714 | orchestrator | ok: [testbed-node-4]
2026-03-19 01:59:34.128726 | orchestrator | ok: [testbed-node-5]
2026-03-19 01:59:34.128737 | orchestrator |
2026-03-19 01:59:34.128748 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-19 01:59:34.128759 | orchestrator | Thursday 19 March 2026 01:59:32 +0000 (0:00:00.301) 0:02:12.764 ********
2026-03-19 01:59:34.128770 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 01:59:34.128781 | orchestrator |
2026-03-19 01:59:34.128792 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-19 01:59:34.128803 | orchestrator | Thursday 19 March 2026 01:59:33 +0000 (0:00:00.445) 0:02:13.209 ********
2026-03-19 01:59:34.128814 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:59:34.128825 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:59:34.128835 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:59:34.128846 | orchestrator |
2026-03-19 01:59:34.128857 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-19 01:59:34.128868 | orchestrator | Thursday 19 March 2026 01:59:33 +0000 (0:00:00.449) 0:02:13.658 ********
2026-03-19 01:59:34.128879 | orchestrator | skipping: [testbed-node-3]
2026-03-19 01:59:34.128889 | orchestrator | skipping: [testbed-node-4]
2026-03-19 01:59:34.128900 | orchestrator | skipping: [testbed-node-5]
2026-03-19 01:59:34.128911 | orchestrator |
2026-03-19 01:59:34.128922 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-19 01:59:34.128933 | orchestrator | Thursday 19 March 2026 01:59:33 +0000 (0:00:00.298) 0:02:13.957 ********
2026-03-19 01:59:34.128951 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:01:09.379119 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:01:09.379229 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:01:09.379237 | orchestrator |
2026-03-19 02:01:09.379243 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-19 02:01:09.379248 | orchestrator | Thursday 19 March 2026 01:59:34 +0000 (0:00:00.294) 0:02:14.252 ********
2026-03-19 02:01:09.379252 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:01:09.379256 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:01:09.379260 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:01:09.379264 | orchestrator |
2026-03-19 02:01:09.379268 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-19 02:01:09.379272 | orchestrator | Thursday 19 March 2026 01:59:34 +0000 (0:00:00.624) 0:02:14.877 ********
2026-03-19 02:01:09.379276 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:01:09.379280 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:01:09.379284 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:01:09.379288 | orchestrator |
2026-03-19 02:01:09.379292 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-19 02:01:09.379296 | orchestrator | Thursday 19 March 2026 01:59:36 +0000 (0:00:01.296) 0:02:16.174 ********
2026-03-19 02:01:09.379299 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:01:09.379303 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:01:09.379307 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:01:09.379311 | orchestrator |
2026-03-19 02:01:09.379315 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-19 02:01:09.379319 | orchestrator | Thursday 19 March 2026 01:59:37 +0000 (0:00:01.292) 0:02:17.466 ********
2026-03-19 02:01:09.379322 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:01:09.379326 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:01:09.379330 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:01:09.379334 | orchestrator |
2026-03-19 02:01:09.379337 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-19 02:01:09.379359 | orchestrator |
2026-03-19 02:01:09.379363 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-19 02:01:09.379367 | orchestrator | Thursday 19 March 2026 01:59:47 +0000 (0:00:10.144) 0:02:27.611 ********
2026-03-19 02:01:09.379370 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379375 | orchestrator |
2026-03-19 02:01:09.379379 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-19 02:01:09.379382 | orchestrator | Thursday 19 March 2026 01:59:48 +0000 (0:00:00.586) 0:02:28.431 ********
2026-03-19 02:01:09.379386 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379390 | orchestrator |
2026-03-19 02:01:09.379394 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-19 02:01:09.379398 | orchestrator | Thursday 19 March 2026 01:59:49 +0000 (0:00:00.586) 0:02:29.017 ********
2026-03-19 02:01:09.379402 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-19 02:01:09.379405 | orchestrator |
2026-03-19 02:01:09.379409 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-19 02:01:09.379413 | orchestrator | Thursday 19 March 2026 01:59:49 +0000 (0:00:00.519) 0:02:29.537 ********
2026-03-19 02:01:09.379417 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379421 | orchestrator |
2026-03-19 02:01:09.379424 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-19 02:01:09.379428 | orchestrator | Thursday 19 March 2026 01:59:50 +0000 (0:00:00.844) 0:02:30.381 ********
2026-03-19 02:01:09.379432 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379436 | orchestrator |
2026-03-19 02:01:09.379439 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-19 02:01:09.379443 | orchestrator | Thursday 19 March 2026 01:59:50 +0000 (0:00:00.569) 0:02:30.951 ********
2026-03-19 02:01:09.379447 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-19 02:01:09.379451 | orchestrator |
2026-03-19 02:01:09.379455 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-19 02:01:09.379459 | orchestrator | Thursday 19 March 2026 01:59:52 +0000 (0:00:01.577) 0:02:32.528 ********
2026-03-19 02:01:09.379462 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-19 02:01:09.379466 | orchestrator |
2026-03-19 02:01:09.379485 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-19 02:01:09.379492 | orchestrator | Thursday 19 March 2026 01:59:53 +0000 (0:00:00.782) 0:02:33.310 ********
2026-03-19 02:01:09.379496 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379500 | orchestrator |
2026-03-19 02:01:09.379503 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-19 02:01:09.379507 | orchestrator | Thursday 19 March 2026 01:59:53 +0000 (0:00:00.411) 0:02:33.722 ********
2026-03-19 02:01:09.379511 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379515 | orchestrator |
2026-03-19 02:01:09.379518 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-19 02:01:09.379522 | orchestrator |
2026-03-19 02:01:09.379526 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-19 02:01:09.379530 | orchestrator | Thursday 19 March 2026 01:59:54 +0000 (0:00:00.431) 0:02:34.153 ********
2026-03-19 02:01:09.379534 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379537 | orchestrator |
2026-03-19 02:01:09.379541 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-19 02:01:09.379545 | orchestrator | Thursday 19 March 2026 01:59:54 +0000 (0:00:00.141) 0:02:34.294 ********
2026-03-19 02:01:09.379549 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-19 02:01:09.379553 | orchestrator |
2026-03-19 02:01:09.379557 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-19 02:01:09.379561 | orchestrator | Thursday 19 March 2026 01:59:54 +0000 (0:00:00.380) 0:02:34.674 ********
2026-03-19 02:01:09.379565 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379568 | orchestrator |
2026-03-19 02:01:09.379576 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-19 02:01:09.379579 | orchestrator | Thursday 19 March 2026 01:59:55 +0000 (0:00:00.834) 0:02:35.509 ********
2026-03-19 02:01:09.379583 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379587 | orchestrator |
2026-03-19 02:01:09.379601 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-19 02:01:09.379605 | orchestrator | Thursday 19 March 2026 01:59:57 +0000 (0:00:01.503) 0:02:37.012 ********
2026-03-19 02:01:09.379609 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379612 | orchestrator |
2026-03-19 02:01:09.379616 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-19 02:01:09.379620 | orchestrator | Thursday 19 March 2026 01:59:57 +0000 (0:00:00.810) 0:02:37.823 ********
2026-03-19 02:01:09.379624 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379628 | orchestrator |
2026-03-19 02:01:09.379631 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-19 02:01:09.379635 | orchestrator | Thursday 19 March 2026 01:59:58 +0000 (0:00:00.478) 0:02:38.302 ********
2026-03-19 02:01:09.379639 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379642 | orchestrator |
2026-03-19 02:01:09.379646 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-19 02:01:09.379650 | orchestrator | Thursday 19 March 2026 02:00:05 +0000 (0:00:06.966) 0:02:45.268 ********
2026-03-19 02:01:09.379654 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:09.379657 | orchestrator |
2026-03-19 02:01:09.379661 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-19 02:01:09.379665 | orchestrator | Thursday 19 March 2026 02:00:17 +0000 (0:00:11.848) 0:02:57.117 ********
2026-03-19 02:01:09.379669 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:09.379673 | orchestrator |
2026-03-19 02:01:09.379678 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-19 02:01:09.379682 | orchestrator |
2026-03-19 02:01:09.379687 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-19 02:01:09.379691 | orchestrator | Thursday 19 March 2026 02:00:17 +0000 (0:00:00.677) 0:02:57.794 ********
2026-03-19 02:01:09.379696 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:01:09.379701 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:01:09.379705 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:01:09.379710 | orchestrator |
2026-03-19 02:01:09.379714 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-19 02:01:09.379718 | orchestrator | Thursday 19 March 2026 02:00:18 +0000 (0:00:00.333) 0:02:58.128 ********
2026-03-19 02:01:09.379723 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379727 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:01:09.379732 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:01:09.379736 | orchestrator |
2026-03-19 02:01:09.379741 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-19 02:01:09.379745 | orchestrator | Thursday 19 March 2026 02:00:18 +0000 (0:00:00.341) 0:02:58.469 ********
2026-03-19 02:01:09.379749 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:01:09.379754 | orchestrator |
2026-03-19 02:01:09.379759 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-19 02:01:09.379763 | orchestrator | Thursday 19 March 2026 02:00:19 +0000 (0:00:00.680) 0:02:59.150 ********
2026-03-19 02:01:09.379768 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 02:01:09.379772 | orchestrator |
2026-03-19 02:01:09.379776 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-19 02:01:09.379781 | orchestrator | Thursday 19 March 2026 02:00:19 +0000 (0:00:00.783) 0:02:59.933 ********
2026-03-19 02:01:09.379785 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 02:01:09.379790 | orchestrator |
2026-03-19 02:01:09.379794 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-19 02:01:09.379847 | orchestrator | Thursday 19 March 2026 02:00:20 +0000 (0:00:00.796) 0:03:00.730 ********
2026-03-19 02:01:09.379856 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379863 | orchestrator |
2026-03-19 02:01:09.379869 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-19 02:01:09.379874 | orchestrator | Thursday 19 March 2026 02:00:20 +0000 (0:00:00.122) 0:03:00.853 ********
2026-03-19 02:01:09.379880 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 02:01:09.379886 | orchestrator |
2026-03-19 02:01:09.379893 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-19 02:01:09.379900 | orchestrator | Thursday 19 March 2026 02:00:21 +0000 (0:00:00.931) 0:03:01.784 ********
2026-03-19 02:01:09.379906 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379913 | orchestrator |
2026-03-19 02:01:09.379919 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-19 02:01:09.379925 | orchestrator | Thursday 19 March 2026 02:00:21 +0000 (0:00:00.135) 0:03:01.920 ********
2026-03-19 02:01:09.379931 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379939 | orchestrator |
2026-03-19 02:01:09.379945 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-19 02:01:09.379951 | orchestrator | Thursday 19 March 2026 02:00:22 +0000 (0:00:00.120) 0:03:02.040 ********
2026-03-19 02:01:09.379957 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379964 | orchestrator |
2026-03-19 02:01:09.379973 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-19 02:01:09.379986 | orchestrator | Thursday 19 March 2026 02:00:22 +0000 (0:00:00.119) 0:03:02.160 ********
2026-03-19 02:01:09.379993 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:09.379999 | orchestrator |
2026-03-19 02:01:09.380004 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-19 02:01:09.380010 | orchestrator | Thursday 19 March 2026 02:00:22 +0000 (0:00:00.109) 0:03:02.269 ********
2026-03-19 02:01:09.380016 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 02:01:09.380022 | orchestrator |
2026-03-19 02:01:09.380029 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-19 02:01:09.380035 | orchestrator | Thursday 19 March 2026 02:00:27 +0000 (0:00:05.081) 0:03:07.351 ********
2026-03-19 02:01:09.380042 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-19 02:01:09.380049 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
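The `FAILED - RETRYING ... (30 retries left)` line above is Ansible's `until`/`retries` loop at work: the check is re-run until it succeeds or the retry budget runs out. A pure-shell sketch of the same pattern; the stub `check_ready` stands in for something like `kubectl rollout status deployment/cilium-operator` (the actual command behind the task is an assumption):

```shell
retries=30
attempt=1
# Stub check: succeeds from the third attempt on. In the play this would be
# a real readiness probe against the cluster (e.g. kubectl rollout status).
check_ready() { [ "$attempt" -ge 3 ]; }

until check_ready; do
  if [ "$attempt" -ge "$retries" ]; then
    echo "resources never became ready" >&2
    exit 1
  fi
  echo "FAILED - RETRYING: Wait for Cilium resources ($((retries - attempt)) retries left)."
  attempt=$((attempt + 1))
done
echo "ready after $attempt attempts"
```

In the real task a `delay` between attempts spaces the probes out; the log shows the loop converging after a single retry once the Cilium pods come up.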
2026-03-19 02:01:09.380062 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-19 02:01:32.315516 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-19 02:01:32.315646 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-19 02:01:32.315662 | orchestrator |
2026-03-19 02:01:32.315676 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-19 02:01:32.315688 | orchestrator | Thursday 19 March 2026 02:01:09 +0000 (0:00:42.015) 0:03:49.367 ********
2026-03-19 02:01:32.315700 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 02:01:32.315711 | orchestrator |
2026-03-19 02:01:32.315722 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-19 02:01:32.315733 | orchestrator | Thursday 19 March 2026 02:01:10 +0000 (0:00:01.187) 0:03:50.554 ********
2026-03-19 02:01:32.315744 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 02:01:32.315755 | orchestrator |
2026-03-19 02:01:32.315766 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-19 02:01:32.315777 | orchestrator | Thursday 19 March 2026 02:01:12 +0000 (0:00:01.465) 0:03:52.020 ********
2026-03-19 02:01:32.315788 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 02:01:32.315799 | orchestrator |
2026-03-19 02:01:32.315810 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-19 02:01:32.315821 | orchestrator | Thursday 19 March 2026 02:01:13 +0000 (0:00:01.207) 0:03:53.228 ********
2026-03-19 02:01:32.315885 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:32.315898 | orchestrator |
2026-03-19 02:01:32.315908 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-19 02:01:32.315919 | orchestrator | Thursday 19 March 2026 02:01:13 +0000 (0:00:00.126) 0:03:53.355 ********
2026-03-19 02:01:32.315930 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-19 02:01:32.315943 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-19 02:01:32.315953 | orchestrator |
2026-03-19 02:01:32.315964 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-19 02:01:32.315975 | orchestrator | Thursday 19 March 2026 02:01:15 +0000 (0:00:01.893) 0:03:55.248 ********
2026-03-19 02:01:32.315986 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:32.316000 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:01:32.316012 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:01:32.316025 | orchestrator |
2026-03-19 02:01:32.316037 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-19 02:01:32.316049 | orchestrator | Thursday 19 March 2026 02:01:15 +0000 (0:00:00.281) 0:03:55.530 ********
2026-03-19 02:01:32.316062 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:01:32.316075 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:01:32.316087 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:01:32.316099 | orchestrator |
2026-03-19 02:01:32.316112 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-19 02:01:32.316124 | orchestrator |
2026-03-19 02:01:32.316136 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-19 02:01:32.316150 | orchestrator | Thursday 19 March 2026 02:01:16 +0000 (0:00:00.886) 0:03:56.416 ********
2026-03-19 02:01:32.316162 | orchestrator | ok: [testbed-manager]
2026-03-19 02:01:32.316174 | orchestrator |
2026-03-19 02:01:32.316187 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-19 02:01:32.316201 | orchestrator | Thursday 19 March 2026 02:01:16 +0000 (0:00:00.394) 0:03:56.811 ********
2026-03-19 02:01:32.316213 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-19 02:01:32.316225 | orchestrator |
2026-03-19 02:01:32.316244 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-19 02:01:32.316263 | orchestrator | Thursday 19 March 2026 02:01:17 +0000 (0:00:00.246) 0:03:57.058 ********
2026-03-19 02:01:32.316281 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:32.316310 | orchestrator |
2026-03-19 02:01:32.316328 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-19 02:01:32.316346 | orchestrator |
2026-03-19 02:01:32.316363 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-19 02:01:32.316382 | orchestrator | Thursday 19 March 2026 02:01:22 +0000 (0:00:05.871) 0:04:02.929 ********
2026-03-19 02:01:32.316398 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:01:32.316415 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:01:32.316433 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:01:32.316451 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:01:32.316468 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:01:32.316485 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:01:32.316502 | orchestrator |
2026-03-19 02:01:32.316520 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-19 02:01:32.316537 | orchestrator | Thursday 19 March 2026 02:01:23 +0000 (0:00:00.576) 0:04:03.505 ********
2026-03-19 02:01:32.316556 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-19 02:01:32.316574 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-19 02:01:32.316589 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-19 02:01:32.316605 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-19 02:01:32.316640 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-19 02:01:32.316658 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-19 02:01:32.316675 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-19 02:01:32.316694 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-19 02:01:32.316711 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-19 02:01:32.316757 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-19 02:01:32.316779 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-19 02:01:32.316799 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-19 02:01:32.316816 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-19 02:01:32.316953 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-19 02:01:32.316976 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-19 02:01:32.317019 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-19 02:01:32.317038 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-19 02:01:32.317056 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-19 02:01:32.317076 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-19 02:01:32.317095 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-19 02:01:32.317112 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-19 02:01:32.317128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-19 02:01:32.317139 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-19 02:01:32.317150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-19 02:01:32.317161 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-19 02:01:32.317172 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-19 02:01:32.317182 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-19 02:01:32.317193 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-19 02:01:32.317204 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-19 02:01:32.317215 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-19 02:01:32.317226 | orchestrator |
2026-03-19 02:01:32.317237 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-19 02:01:32.317247 | orchestrator | Thursday 19 March 2026 02:01:31 +0000 (0:00:07.635) 0:04:11.141 ********
2026-03-19 02:01:32.317258 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:01:32.317269 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:01:32.317280 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:01:32.317291 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:32.317302 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:01:32.317313 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:01:32.317323 | orchestrator |
2026-03-19 02:01:32.317334 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-19 02:01:32.317345 | orchestrator | Thursday 19 March 2026 02:01:31 +0000 (0:00:00.534) 0:04:11.676 ********
2026-03-19 02:01:32.317356 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:01:32.317379 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:01:32.317390 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:01:32.317400 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:01:32.317411 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:01:32.317422 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:01:32.317432 | orchestrator |
2026-03-19 02:01:32.317443 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:01:32.317455 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:01:32.317469 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-19 02:01:32.317480 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-19 02:01:32.317491 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-19 02:01:32.317502 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-19 02:01:32.317513 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-19 02:01:32.317523 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-19 02:01:32.317534 | orchestrator |
2026-03-19 02:01:32.317545 | orchestrator |
2026-03-19 02:01:32.317556 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:01:32.317567 | orchestrator | Thursday 19 March 2026 02:01:32 +0000 (0:00:00.620) 0:04:12.297 ********
2026-03-19 02:01:32.317590 | orchestrator | ===============================================================================
2026-03-19 02:01:32.663444 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.17s
2026-03-19 02:01:32.663524 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.02s
2026-03-19 02:01:32.663531 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.50s
2026-03-19 02:01:32.663536 | orchestrator | kubectl : Install required packages ------------------------------------ 11.85s
2026-03-19 02:01:32.663542 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.14s
2026-03-19 02:01:32.663547 | orchestrator | Manage labels ----------------------------------------------------------- 7.64s
2026-03-19 02:01:32.663551 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.97s
2026-03-19 02:01:32.663556 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.87s
2026-03-19 02:01:32.663561 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.20s
2026-03-19 02:01:32.663566 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.08s
2026-03-19 02:01:32.663571 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s
2026-03-19 02:01:32.663577 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.98s
2026-03-19 02:01:32.663582 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.89s
2026-03-19 02:01:32.663587 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.82s
2026-03-19 02:01:32.663592 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.63s
2026-03-19 02:01:32.663596 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.61s
2026-03-19 02:01:32.663601 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s
2026-03-19 02:01:32.663630 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.50s
2026-03-19 02:01:32.663635 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.47s
2026-03-19 02:01:32.663640 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.33s
2026-03-19 02:01:32.937257 | orchestrator | + osism apply copy-kubeconfig
2026-03-19 02:01:45.109535 | orchestrator | 2026-03-19 02:01:45 | INFO  | Task 5a7a381f-5c26-43ee-b9bf-65f51f2fa917 (copy-kubeconfig) was prepared for execution.
2026-03-19 02:01:45.109666 | orchestrator | 2026-03-19 02:01:45 | INFO  | It takes a moment until task 5a7a381f-5c26-43ee-b9bf-65f51f2fa917 (copy-kubeconfig) has been started and output is visible here.
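The "Manage labels" task in the play above applies each `key=value` item to its node, typically via something like `kubectl label node <node> <key>=<value> --overwrite` (the exact module behind the task is not visible in the log; `--overwrite` makes re-runs idempotent). A sketch that only demonstrates splitting the item strings from the log into key and value:

```shell
# Split label items as they appear in the log into key/value pairs.
# The kubectl invocation is printed rather than run, since no cluster
# is available here; node name and command shape are illustrative.
for item in \
  "node-role.osism.tech/control-plane=true" \
  "openstack-control-plane=enabled" \
  "node-role.kubernetes.io/worker=worker"; do
  key=${item%%=*}     # everything before the first '='
  value=${item#*=}    # everything after the first '='
  echo "would run: kubectl label node testbed-node-0 ${key}=${value} --overwrite"
done
```

The same `${var%%=*}` / `${var#*=}` parameter expansions work for any `key=value` convention, including the annotation and taint items the later tasks skip.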
2026-03-19 02:01:52.022461 | orchestrator |
2026-03-19 02:01:52.022604 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-19 02:01:52.022645 | orchestrator |
2026-03-19 02:01:52.022674 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-19 02:01:52.022691 | orchestrator | Thursday 19 March 2026 02:01:49 +0000 (0:00:00.153) 0:00:00.153 ********
2026-03-19 02:01:52.022706 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-19 02:01:52.022722 | orchestrator |
2026-03-19 02:01:52.022737 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-19 02:01:52.022752 | orchestrator | Thursday 19 March 2026 02:01:50 +0000 (0:00:00.726) 0:00:00.880 ********
2026-03-19 02:01:52.022793 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:52.022827 | orchestrator |
2026-03-19 02:01:52.022843 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-19 02:01:52.022925 | orchestrator | Thursday 19 March 2026 02:01:51 +0000 (0:00:01.198) 0:00:02.078 ********
2026-03-19 02:01:52.022948 | orchestrator | changed: [testbed-manager]
2026-03-19 02:01:52.022963 | orchestrator |
2026-03-19 02:01:52.022984 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:01:52.023002 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:01:52.023019 | orchestrator |
2026-03-19 02:01:52.023034 | orchestrator |
2026-03-19 02:01:52.023050 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:01:52.023066 | orchestrator | Thursday 19 March 2026 02:01:51 +0000 (0:00:00.454) 0:00:02.533 ********
2026-03-19 02:01:52.023082 | orchestrator | ===============================================================================
2026-03-19 02:01:52.023098 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s
2026-03-19 02:01:52.023113 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s
2026-03-19 02:01:52.023129 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.45s
2026-03-19 02:01:52.304896 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-03-19 02:02:04.356295 | orchestrator | 2026-03-19 02:02:04 | INFO  | Task 271fa943-5257-4570-b786-ef895eaadc7d (openstackclient) was prepared for execution.
2026-03-19 02:02:04.356406 | orchestrator | 2026-03-19 02:02:04 | INFO  | It takes a moment until task 271fa943-5257-4570-b786-ef895eaadc7d (openstackclient) has been started and output is visible here.
2026-03-19 02:02:49.328070 | orchestrator |
2026-03-19 02:02:49.328232 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-19 02:02:49.328259 | orchestrator |
2026-03-19 02:02:49.328272 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-19 02:02:49.328283 | orchestrator | Thursday 19 March 2026 02:02:08 +0000 (0:00:00.172) 0:00:00.172 ********
2026-03-19 02:02:49.328295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-19 02:02:49.328307 | orchestrator |
2026-03-19 02:02:49.328378 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-19 02:02:49.328391 | orchestrator | Thursday 19 March 2026 02:02:08 +0000 (0:00:00.204) 0:00:00.376 ********
2026-03-19 02:02:49.328402 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-19 02:02:49.328414 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-19 02:02:49.328425 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-19 02:02:49.328436 | orchestrator |
2026-03-19 02:02:49.328446 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-19 02:02:49.328457 | orchestrator | Thursday 19 March 2026 02:02:09 +0000 (0:00:01.077) 0:00:01.453 ********
2026-03-19 02:02:49.328468 | orchestrator | changed: [testbed-manager]
2026-03-19 02:02:49.328480 | orchestrator |
2026-03-19 02:02:49.328491 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-19 02:02:49.328501 | orchestrator | Thursday 19 March 2026 02:02:10 +0000 (0:00:01.169) 0:00:02.623 ********
2026-03-19 02:02:49.328512 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-19 02:02:49.328524 | orchestrator | ok: [testbed-manager]
2026-03-19 02:02:49.328538 | orchestrator |
2026-03-19 02:02:49.328551 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-19 02:02:49.328571 | orchestrator | Thursday 19 March 2026 02:02:44 +0000 (0:00:33.172) 0:00:35.796 ********
2026-03-19 02:02:49.328589 | orchestrator | changed: [testbed-manager]
2026-03-19 02:02:49.328609 | orchestrator |
2026-03-19 02:02:49.328628 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-19 02:02:49.328646 | orchestrator | Thursday 19 March 2026 02:02:45 +0000 (0:00:00.640) 0:00:36.712 ********
2026-03-19 02:02:49.328662 | orchestrator | ok: [testbed-manager]
2026-03-19 02:02:49.328682 | orchestrator |
2026-03-19 02:02:49.328702 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-19 02:02:49.328722 | orchestrator | Thursday 19 March 2026 02:02:45 +0000 (0:00:00.640) 0:00:37.352 ********
2026-03-19 02:02:49.328743 | orchestrator | changed: [testbed-manager]
2026-03-19 02:02:49.328763 | orchestrator |
2026-03-19 02:02:49.328783 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-19 02:02:49.328803 | orchestrator | Thursday 19 March 2026 02:02:47 +0000 (0:00:01.514) 0:00:38.867 ********
2026-03-19 02:02:49.328824 | orchestrator | changed: [testbed-manager]
2026-03-19 02:02:49.328845 | orchestrator |
2026-03-19 02:02:49.328863 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-19 02:02:49.328882 | orchestrator | Thursday 19 March 2026 02:02:47 +0000 (0:00:00.694) 0:00:39.561 ********
2026-03-19 02:02:49.328900 | orchestrator | changed: [testbed-manager]
2026-03-19 02:02:49.328946 | orchestrator |
2026-03-19 02:02:49.328967 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-19 02:02:49.328986 | orchestrator | Thursday 19 March 2026 02:02:48 +0000 (0:00:00.602) 0:00:40.163 ********
2026-03-19 02:02:49.329003 | orchestrator | ok: [testbed-manager]
2026-03-19 02:02:49.329022 | orchestrator |
2026-03-19 02:02:49.329036 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:02:49.329047 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:02:49.329059 | orchestrator |
2026-03-19 02:02:49.329070 | orchestrator |
2026-03-19 02:02:49.329081 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:02:49.329092 | orchestrator | Thursday 19 March 2026 02:02:48 +0000 (0:00:00.428) 0:00:40.592 ********
2026-03-19 02:02:49.329103 | orchestrator | ===============================================================================
2026-03-19 02:02:49.329113 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.17s
2026-03-19 02:02:49.329124 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.51s
2026-03-19 02:02:49.329148 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.17s
2026-03-19 02:02:49.329159 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.08s
2026-03-19 02:02:49.329170 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.92s
2026-03-19 02:02:49.329181 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.69s
2026-03-19 02:02:49.329192 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s
2026-03-19 02:02:49.329202 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-03-19 02:02:49.329213 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2026-03-19 02:02:49.329224 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.20s
2026-03-19 02:02:51.576389 | orchestrator | 2026-03-19 02:02:51 | INFO  | Task 70059751-339f-435b-a8f5-ab7e21721976 (common) was prepared for execution.
2026-03-19 02:02:51.576474 | orchestrator | 2026-03-19 02:02:51 | INFO  | It takes a moment until task 70059751-339f-435b-a8f5-ab7e21721976 (common) has been started and output is visible here.
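The "Wait for an healthy service" handler in the openstackclient play above is a poll-until-healthy loop; such handlers typically wrap `docker inspect --format '{{.State.Health.Status}}' <container>`, though the exact command is not shown in the log. A stub-based sketch in which a local file stands in for the container's health status and is faked to converge:

```shell
# Stub for the container health status. In a real deployment this would be:
#   docker inspect --format '{{.State.Health.Status}}' openstackclient
echo starting > health.state
health() { cat health.state; }

# Poll until healthy, with a bounded number of attempts.
i=0
while [ "$(health)" != "healthy" ] && [ "$i" -lt 5 ]; do
  i=$((i + 1))
  echo healthy > health.state   # the service converges; faked here for the sketch
done
[ "$(health)" = "healthy" ] && echo "service is healthy"
```

A real poll would also `sleep` between attempts and fail the handler if the budget is exhausted, as the 33-second retry on "Manage openstackclient service" above illustrates.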
2026-03-19 02:03:02.756518 | orchestrator | 2026-03-19 02:03:02.756623 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-19 02:03:02.756635 | orchestrator | 2026-03-19 02:03:02.756643 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-19 02:03:02.756651 | orchestrator | Thursday 19 March 2026 02:02:55 +0000 (0:00:00.208) 0:00:00.208 ******** 2026-03-19 02:03:02.756659 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:03:02.756667 | orchestrator | 2026-03-19 02:03:02.756674 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-19 02:03:02.756681 | orchestrator | Thursday 19 March 2026 02:02:56 +0000 (0:00:01.052) 0:00:01.261 ******** 2026-03-19 02:03:02.756688 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756695 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756704 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756716 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756726 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756737 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756748 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756759 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 02:03:02.756770 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-03-19 02:03:02.756803 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756813 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756821 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756828 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756834 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756841 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756849 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 02:03:02.756855 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756884 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756892 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756899 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756906 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 02:03:02.756912 | orchestrator | 2026-03-19 02:03:02.756919 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-19 02:03:02.756954 | orchestrator | Thursday 19 March 2026 02:02:58 +0000 (0:00:02.403) 0:00:03.664 ******** 2026-03-19 02:03:02.756969 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:03:02.756987 | orchestrator | 2026-03-19 02:03:02.756997 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-19 02:03:02.757012 | orchestrator | Thursday 19 March 2026 02:03:00 +0000 (0:00:01.163) 0:00:04.828 ******** 2026-03-19 02:03:02.757025 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:02.757144 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:02.757156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:02.757175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882774 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 
02:03:03.882799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:03.882854 | orchestrator | 2026-03-19 02:03:03.882861 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-19 02:03:03.882868 | orchestrator | Thursday 19 March 2026 02:03:03 +0000 (0:00:03.429) 0:00:08.257 ******** 2026-03-19 02:03:03.882876 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:03.882883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:03.882889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:03.882895 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:03:03.882902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:03.882916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420538 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:03:04.420608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:04.420626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420650 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:03:04.420661 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:04.420678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420701 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:03:04.420731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:04.420751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420774 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:03:04.420786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:04.420797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:04.420820 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:03:04.420832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:04.420851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.218643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.218778 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:03:05.218803 | orchestrator | 2026-03-19 02:03:05.218823 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-19 02:03:05.218843 | orchestrator | Thursday 19 March 2026 02:03:04 +0000 (0:00:00.821) 0:00:09.079 ******** 2026-03-19 02:03:05.218864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:05.218886 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.218904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.218923 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:03:05.219004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:05.219033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.219085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.219105 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:03:05.219160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:05.219183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.219203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.219221 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:03:05.219241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:05.219260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-19 02:03:05.219286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:05.219316 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:03:05.219336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:05.219383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983168 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:03:09.983182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:09.983191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983205 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:03:09.983212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 02:03:09.983242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:09.983256 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:03:09.983262 | orchestrator | 2026-03-19 
02:03:09.983270 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-19 02:03:09.983278 | orchestrator | Thursday 19 March 2026 02:03:06 +0000 (0:00:01.658) 0:00:10.737 ******** 2026-03-19 02:03:09.983285 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:03:09.983291 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:03:09.983298 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:03:09.983304 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:03:09.983328 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:03:09.983334 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:03:09.983341 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:03:09.983347 | orchestrator | 2026-03-19 02:03:09.983355 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-19 02:03:09.983364 | orchestrator | Thursday 19 March 2026 02:03:06 +0000 (0:00:00.645) 0:00:11.382 ******** 2026-03-19 02:03:09.983371 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:03:09.983377 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:03:09.983383 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:03:09.983389 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:03:09.983395 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:03:09.983401 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:03:09.983407 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:03:09.983413 | orchestrator | 2026-03-19 02:03:09.983419 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-19 02:03:09.983425 | orchestrator | Thursday 19 March 2026 02:03:07 +0000 (0:00:00.797) 0:00:12.180 ******** 2026-03-19 02:03:09.983433 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:09.983523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:12.846497 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846754 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:12.846867 | orchestrator | 2026-03-19 02:03:12.846880 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-19 02:03:12.846891 | orchestrator | Thursday 19 March 
2026 02:03:10 +0000 (0:00:03.467) 0:00:15.647 ******** 2026-03-19 02:03:12.846902 | orchestrator | [WARNING]: Skipped 2026-03-19 02:03:12.846915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-19 02:03:12.846928 | orchestrator | to this access issue: 2026-03-19 02:03:12.846971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-19 02:03:12.846983 | orchestrator | directory 2026-03-19 02:03:12.846995 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 02:03:12.847007 | orchestrator | 2026-03-19 02:03:12.847018 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-19 02:03:12.847031 | orchestrator | Thursday 19 March 2026 02:03:11 +0000 (0:00:00.946) 0:00:16.594 ******** 2026-03-19 02:03:12.847044 | orchestrator | [WARNING]: Skipped 2026-03-19 02:03:12.847065 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-19 02:03:22.100297 | orchestrator | to this access issue: 2026-03-19 02:03:22.100385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-19 02:03:22.100392 | orchestrator | directory 2026-03-19 02:03:22.100398 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 02:03:22.100404 | orchestrator | 2026-03-19 02:03:22.100409 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-19 02:03:22.100415 | orchestrator | Thursday 19 March 2026 02:03:13 +0000 (0:00:01.166) 0:00:17.760 ******** 2026-03-19 02:03:22.100439 | orchestrator | [WARNING]: Skipped 2026-03-19 02:03:22.100444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-19 02:03:22.100448 | orchestrator | to this access issue: 2026-03-19 02:03:22.100452 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-19 02:03:22.100457 | orchestrator | directory 2026-03-19 02:03:22.100461 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 02:03:22.100465 | orchestrator | 2026-03-19 02:03:22.100469 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-19 02:03:22.100474 | orchestrator | Thursday 19 March 2026 02:03:13 +0000 (0:00:00.831) 0:00:18.592 ******** 2026-03-19 02:03:22.100478 | orchestrator | [WARNING]: Skipped 2026-03-19 02:03:22.100482 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-19 02:03:22.100486 | orchestrator | to this access issue: 2026-03-19 02:03:22.100490 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-19 02:03:22.100494 | orchestrator | directory 2026-03-19 02:03:22.100498 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 02:03:22.100503 | orchestrator | 2026-03-19 02:03:22.100507 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-19 02:03:22.100511 | orchestrator | Thursday 19 March 2026 02:03:14 +0000 (0:00:00.823) 0:00:19.416 ******** 2026-03-19 02:03:22.100515 | orchestrator | changed: [testbed-manager] 2026-03-19 02:03:22.100519 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:03:22.100523 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:03:22.100528 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:03:22.100532 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:03:22.100536 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:03:22.100554 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:03:22.100559 | orchestrator | 2026-03-19 02:03:22.100563 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-19 02:03:22.100567 | orchestrator | 
Thursday 19 March 2026 02:03:17 +0000 (0:00:02.407) 0:00:21.823 ******** 2026-03-19 02:03:22.100571 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100581 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100585 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100589 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100593 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100600 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 02:03:22.100604 | orchestrator | 2026-03-19 02:03:22.100609 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-19 02:03:22.100613 | orchestrator | Thursday 19 March 2026 02:03:19 +0000 (0:00:01.997) 0:00:23.821 ******** 2026-03-19 02:03:22.100617 | orchestrator | changed: [testbed-manager] 2026-03-19 02:03:22.100621 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:03:22.100625 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:03:22.100629 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:03:22.100633 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:03:22.100637 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:03:22.100641 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:03:22.100645 | orchestrator | 2026-03-19 02:03:22.100650 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2026-03-19 02:03:22.100658 | orchestrator | Thursday 19 March 2026 02:03:21 +0000 (0:00:01.876) 0:00:25.698 ******** 2026-03-19 02:03:22.100663 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:22.100680 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:22.100685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:22.100690 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:22.100694 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:22.100701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:22.100712 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:22.100721 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:22.100726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:22.100734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.143589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:28.143735 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.143751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.143778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:28.143806 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.143815 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.143823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:03:28.143858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.143867 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.143875 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.143883 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-03-19 02:03:28.143892 | orchestrator |
2026-03-19 02:03:28.143901 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-19 02:03:28.143910 | orchestrator | Thursday 19 March 2026 02:03:22 +0000 (0:00:01.510) 0:00:27.208 ********
2026-03-19 02:03:28.143918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.143926 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.143940 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.143948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.144019 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.144028 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.144035 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-19 02:03:28.144043 | orchestrator |
2026-03-19 02:03:28.144050 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-19 02:03:28.144057 | orchestrator | Thursday 19 March 2026 02:03:24 +0000 (0:00:01.954) 0:00:29.163 ********
2026-03-19 02:03:28.144065 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144104 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144111 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144118 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-19 02:03:28.144125 | orchestrator |
2026-03-19 02:03:28.144133 | orchestrator | TASK [common : Check common containers] ****************************************
2026-03-19 02:03:28.144141 | orchestrator | Thursday 19 March 2026 02:03:26 +0000 (0:00:01.683) 0:00:30.846 ********
2026-03-19 02:03:28.144148 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 02:03:28.144166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 02:03:28.798116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.798267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.798313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.798340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.798350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 02:03:28.798361 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 02:03:28.798404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-19 02:03:28.798451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798464 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:03:28.798512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:04:47.753571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:04:47.753757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:04:47.753786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 02:04:47.753825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 02:04:47.753843 | orchestrator |
2026-03-19 02:04:47.753863 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-19 02:04:47.753881 | orchestrator | Thursday 19 March 2026 02:03:28 +0000 (0:00:02.611) 0:00:33.458 ********
2026-03-19 02:04:47.753899 | orchestrator | changed: [testbed-manager]
2026-03-19 02:04:47.753911 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:04:47.753921 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:04:47.753930 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:04:47.753941 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:04:47.753950 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:04:47.753960 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:04:47.753969 | orchestrator |
2026-03-19 02:04:47.753980 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-19 02:04:47.753989 | orchestrator | Thursday 19 March 2026 02:03:30 +0000 (0:00:01.414) 0:00:34.872 ********
2026-03-19 02:04:47.753999 | orchestrator | changed: [testbed-manager]
2026-03-19 02:04:47.754008 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:04:47.754105 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:04:47.754119 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:04:47.754130 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:04:47.754141 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:04:47.754153 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:04:47.754164 | orchestrator |
2026-03-19 02:04:47.754175 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754186 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.064) 0:00:35.982 ********
2026-03-19 02:04:47.754212 | orchestrator |
2026-03-19 02:04:47.754224 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754245 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.063) 0:00:36.046 ********
2026-03-19 02:04:47.754257 | orchestrator |
2026-03-19 02:04:47.754268 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754280 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.061) 0:00:36.110 ********
2026-03-19 02:04:47.754290 | orchestrator |
2026-03-19 02:04:47.754301 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754312 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.061) 0:00:36.172 ********
2026-03-19 02:04:47.754328 | orchestrator |
2026-03-19 02:04:47.754351 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754385 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.214) 0:00:36.387 ********
2026-03-19 02:04:47.754403 | orchestrator |
2026-03-19 02:04:47.754421 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754439 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.060) 0:00:36.447 ********
2026-03-19 02:04:47.754458 | orchestrator |
2026-03-19 02:04:47.754475 | 
orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 02:04:47.754490 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.057) 0:00:36.505 ********
2026-03-19 02:04:47.754506 | orchestrator |
2026-03-19 02:04:47.754521 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-19 02:04:47.754537 | orchestrator | Thursday 19 March 2026 02:03:31 +0000 (0:00:00.086) 0:00:36.591 ********
2026-03-19 02:04:47.754553 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:04:47.754568 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:04:47.754583 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:04:47.754598 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:04:47.754614 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:04:47.754657 | orchestrator | changed: [testbed-manager]
2026-03-19 02:04:47.754674 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:04:47.754691 | orchestrator |
2026-03-19 02:04:47.754709 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-19 02:04:47.754726 | orchestrator | Thursday 19 March 2026 02:04:04 +0000 (0:00:32.685) 0:01:09.277 ********
2026-03-19 02:04:47.754741 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:04:47.754754 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:04:47.754764 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:04:47.754773 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:04:47.754783 | orchestrator | changed: [testbed-manager]
2026-03-19 02:04:47.754792 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:04:47.754802 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:04:47.754811 | orchestrator |
2026-03-19 02:04:47.754821 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-19 02:04:47.754831 | orchestrator | Thursday 19 March 2026 02:04:41 +0000 (0:00:37.313) 0:01:46.590 ********
2026-03-19 02:04:47.754840 | orchestrator | ok: [testbed-manager]
2026-03-19 02:04:47.754851 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:04:47.754860 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:04:47.754876 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:04:47.754898 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:04:47.754918 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:04:47.754932 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:04:47.754946 | orchestrator |
2026-03-19 02:04:47.754960 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-19 02:04:47.754976 | orchestrator | Thursday 19 March 2026 02:04:43 +0000 (0:00:01.989) 0:01:48.580 ********
2026-03-19 02:04:47.754990 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:04:47.755004 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:04:47.755019 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:04:47.755034 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:04:47.755086 | orchestrator | changed: [testbed-manager]
2026-03-19 02:04:47.755101 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:04:47.755118 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:04:47.755134 | orchestrator |
2026-03-19 02:04:47.755151 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:04:47.755169 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755190 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755225 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755253 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755263 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755273 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755283 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 02:04:47.755293 | orchestrator |
2026-03-19 02:04:47.755303 | orchestrator |
2026-03-19 02:04:47.755314 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:04:47.755328 | orchestrator | Thursday 19 March 2026 02:04:47 +0000 (0:00:03.809) 0:01:52.389 ********
2026-03-19 02:04:47.755344 | orchestrator | ===============================================================================
2026-03-19 02:04:47.755360 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.31s
2026-03-19 02:04:47.755375 | orchestrator | common : Restart fluentd container ------------------------------------- 32.69s
2026-03-19 02:04:47.755398 | orchestrator | common : Restart cron container ----------------------------------------- 3.81s
2026-03-19 02:04:47.755415 | orchestrator | common : Copying over config.json files for services -------------------- 3.47s
2026-03-19 02:04:47.755430 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.43s
2026-03-19 02:04:47.755446 | orchestrator | common : Check common containers ---------------------------------------- 2.61s
2026-03-19 02:04:47.755462 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.41s
2026-03-19 02:04:47.755479 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.40s
2026-03-19 02:04:47.755495 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.00s
2026-03-19 02:04:47.755511 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.99s
2026-03-19 02:04:47.755532 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.95s
2026-03-19 02:04:47.755555 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.88s
2026-03-19 02:04:47.755571 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.68s
2026-03-19 02:04:47.755587 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.66s
2026-03-19 02:04:47.755602 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.51s
2026-03-19 02:04:47.755618 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s
2026-03-19 02:04:47.755646 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.17s
2026-03-19 02:04:48.161358 | orchestrator | common : include_tasks -------------------------------------------------- 1.16s
2026-03-19 02:04:48.161450 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.11s
2026-03-19 02:04:48.161460 | orchestrator | common : include_tasks -------------------------------------------------- 1.05s
2026-03-19 02:04:50.554417 | orchestrator | 2026-03-19 02:04:50 | INFO  | Task 010eaa22-2362-47aa-b4b4-2c5e7fdc2ec5 (loadbalancer) was prepared for execution.
2026-03-19 02:04:50.554525 | orchestrator | 2026-03-19 02:04:50 | INFO  | It takes a moment until task 010eaa22-2362-47aa-b4b4-2c5e7fdc2ec5 (loadbalancer) has been started and output is visible here.
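For readers skimming the "Check common containers" loop items above: each item the `common` role iterates over is a dict describing one container (name, image, environment, bind mounts). As a purely hypothetical illustration — kolla-ansible manages these containers through its own modules, and the `to_docker_run` helper below is not part of any of the tools in this log — the fluentd entry could be rendered into a roughly equivalent `docker run` invocation like this:

```python
# Hypothetical sketch: render a kolla-style container definition (as seen in
# the loop items above) into an approximate `docker run` command line.
# Values are copied from the fluentd entry in this log; the helper itself
# is an illustration, not kolla-ansible's actual mechanism.
fluentd = {
    "container_name": "fluentd",
    "image": "registry.osism.tech/kolla/release/fluentd:5.0.8.20251130",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": [
        "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}

def to_docker_run(svc: dict) -> str:
    """Build a docker run command string from a container definition dict."""
    parts = ["docker", "run", "-d", "--name", svc["container_name"]]
    for key, value in svc.get("environment", {}).items():
        parts += ["-e", f"{key}={value}"]   # one -e flag per env var
    for volume in svc.get("volumes", []):
        parts += ["-v", volume]             # one -v flag per bind mount/volume
    parts.append(svc["image"])
    return " ".join(parts)

print(to_docker_run(fluentd))
```

The same shape applies to the kolla-toolbox and cron entries; kolla-toolbox additionally carries `'privileged': True`, which would correspond to docker's `--privileged` flag.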
2026-03-19 02:05:06.231317 | orchestrator |
2026-03-19 02:05:06.231440 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 02:05:06.231455 | orchestrator |
2026-03-19 02:05:06.231464 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 02:05:06.231472 | orchestrator | Thursday 19 March 2026 02:04:54 +0000 (0:00:00.247) 0:00:00.247 ********
2026-03-19 02:05:06.231502 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:05:06.231512 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:05:06.231520 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:05:06.231527 | orchestrator |
2026-03-19 02:05:06.231534 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 02:05:06.231542 | orchestrator | Thursday 19 March 2026 02:04:54 +0000 (0:00:00.280) 0:00:00.527 ********
2026-03-19 02:05:06.231550 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-19 02:05:06.231558 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-19 02:05:06.231565 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-19 02:05:06.231572 | orchestrator |
2026-03-19 02:05:06.231580 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-19 02:05:06.231587 | orchestrator |
2026-03-19 02:05:06.231594 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-19 02:05:06.231616 | orchestrator | Thursday 19 March 2026 02:04:55 +0000 (0:00:00.407) 0:00:00.935 ********
2026-03-19 02:05:06.231624 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:05:06.231631 | orchestrator |
2026-03-19 02:05:06.231639 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-19 02:05:06.231646 | orchestrator | Thursday 19 March 2026 02:04:55 +0000 (0:00:00.541) 0:00:01.476 ********
2026-03-19 02:05:06.231654 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:05:06.231661 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:05:06.231668 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:05:06.231675 | orchestrator |
2026-03-19 02:05:06.231682 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-19 02:05:06.231690 | orchestrator | Thursday 19 March 2026 02:04:57 +0000 (0:00:01.579) 0:00:03.056 ********
2026-03-19 02:05:06.231697 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:05:06.231704 | orchestrator |
2026-03-19 02:05:06.231711 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-19 02:05:06.231719 | orchestrator | Thursday 19 March 2026 02:04:58 +0000 (0:00:00.677) 0:00:03.733 ********
2026-03-19 02:05:06.231727 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:05:06.231734 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:05:06.231741 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:05:06.231748 | orchestrator |
2026-03-19 02:05:06.231756 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-19 02:05:06.231763 | orchestrator | Thursday 19 March 2026 02:04:58 +0000 (0:00:00.615) 0:00:04.349 ********
2026-03-19 02:05:06.231770 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231778 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231799 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 02:05:06.231808 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 02:05:06.231815 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 02:05:06.231829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 02:05:06.231836 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 02:05:06.231843 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 02:05:06.231857 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 02:05:06.231866 | orchestrator |
2026-03-19 02:05:06.231875 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-19 02:05:06.231884 | orchestrator | Thursday 19 March 2026 02:05:01 +0000 (0:00:03.090) 0:00:07.439 ********
2026-03-19 02:05:06.231893 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-19 02:05:06.231901 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-19 02:05:06.231910 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-19 02:05:06.231918 | orchestrator |
2026-03-19 02:05:06.231927 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-19 02:05:06.231936 | orchestrator | Thursday 19 March 2026 02:05:02 +0000 (0:00:00.712) 0:00:08.152 ********
2026-03-19 02:05:06.231945 | orchestrator | changed: [testbed-node-1] => 
(item=ip_vs) 2026-03-19 02:05:06.231955 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-19 02:05:06.231967 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-19 02:05:06.231978 | orchestrator | 2026-03-19 02:05:06.231989 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 02:05:06.232000 | orchestrator | Thursday 19 March 2026 02:05:03 +0000 (0:00:01.346) 0:00:09.498 ******** 2026-03-19 02:05:06.232011 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-19 02:05:06.232025 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:06.232103 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-19 02:05:06.232115 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:06.232123 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-19 02:05:06.232133 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:06.232141 | orchestrator | 2026-03-19 02:05:06.232150 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-19 02:05:06.232159 | orchestrator | Thursday 19 March 2026 02:05:04 +0000 (0:00:00.507) 0:00:10.005 ******** 2026-03-19 02:05:06.232175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:06.232190 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:06.232199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:06.232214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 
02:05:06.232223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:06.232239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:11.422372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:11.422494 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:11.422513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:11.422526 | orchestrator | 2026-03-19 02:05:11.422541 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-19 02:05:11.422556 | orchestrator | Thursday 19 March 2026 02:05:06 +0000 (0:00:01.805) 0:00:11.811 ******** 2026-03-19 02:05:11.422567 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:05:11.422607 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:05:11.422620 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:05:11.422634 | orchestrator | 2026-03-19 02:05:11.422646 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-19 02:05:11.422659 | orchestrator | Thursday 19 March 2026 02:05:07 +0000 (0:00:00.887) 0:00:12.699 ******** 2026-03-19 02:05:11.422672 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-19 02:05:11.422685 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-19 
02:05:11.422698 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-19 02:05:11.422711 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-19 02:05:11.422724 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-19 02:05:11.422737 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-19 02:05:11.422749 | orchestrator | 2026-03-19 02:05:11.422762 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-19 02:05:11.422774 | orchestrator | Thursday 19 March 2026 02:05:08 +0000 (0:00:01.543) 0:00:14.243 ******** 2026-03-19 02:05:11.422787 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:05:11.422798 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:05:11.422812 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:05:11.422825 | orchestrator | 2026-03-19 02:05:11.422838 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-19 02:05:11.422851 | orchestrator | Thursday 19 March 2026 02:05:09 +0000 (0:00:00.894) 0:00:15.137 ******** 2026-03-19 02:05:11.422864 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:05:11.422877 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:05:11.422890 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:05:11.422904 | orchestrator | 2026-03-19 02:05:11.422917 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-19 02:05:11.422930 | orchestrator | Thursday 19 March 2026 02:05:10 +0000 (0:00:01.318) 0:00:16.456 ******** 2026-03-19 02:05:11.422945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 02:05:11.422980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:11.422991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:11.423001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 02:05:11.423020 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:11.423029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 02:05:11.423113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:11.423132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:11.423147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 02:05:11.423156 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:11.423171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 02:05:14.241379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:14.241498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:14.241509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 02:05:14.241517 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:14.241526 | orchestrator | 2026-03-19 02:05:14.241534 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-19 02:05:14.241542 | orchestrator | Thursday 19 March 2026 02:05:11 +0000 (0:00:00.552) 0:00:17.008 ******** 2026-03-19 02:05:14.241549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:14.241557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:14.241564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:14.241603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:14.241612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:14.241619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', 
'__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 02:05:14.241626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:14.241633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:14.241640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', 
'__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 02:05:14.241668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:22.740708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:22.740845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37', 
'__omit_place_holder__0a58c9b11fd8c9570e4d5239af9123d46fab0d37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-19 02:05:22.740864 | orchestrator |
2026-03-19 02:05:22.740881 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-19 02:05:22.740896 | orchestrator | Thursday 19 March 2026 02:05:14 +0000 (0:00:02.817) 0:00:19.826 ********
2026-03-19 02:05:22.740910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:22.740925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:22.740939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 02:05:22.740985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:22.741035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:22.741050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:22.741064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:22.741134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:22.741148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:22.741162 | orchestrator |
2026-03-19 02:05:22.741175 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-19 02:05:22.741188 | orchestrator | Thursday 19 March 2026 02:05:17 +0000 (0:00:03.254) 0:00:23.080 ********
2026-03-19 02:05:22.741210 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-19 02:05:22.741224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-19 02:05:22.741237 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-19 02:05:22.741250 | orchestrator |
2026-03-19 02:05:22.741263 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-19 02:05:22.741287 | orchestrator | Thursday 19 March 2026 02:05:19 +0000 (0:00:01.874) 0:00:24.954 ********
2026-03-19 02:05:22.741300 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-19 02:05:22.741314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-19 02:05:22.741327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-19 02:05:22.741341 | orchestrator |
2026-03-19 02:05:22.741354 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-19 02:05:22.741367 | orchestrator | Thursday 19 March 2026 02:05:22 +0000 (0:00:02.837) 0:00:27.792 ********
2026-03-19 02:05:22.741381 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:05:22.741394 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:05:22.741407 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:05:22.741419 | orchestrator |
2026-03-19 02:05:22.741441 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-19 02:05:34.298191 | orchestrator | Thursday 19 March 2026 02:05:22 +0000 (0:00:00.538) 0:00:28.330 ********
2026-03-19 02:05:34.298340 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-19 02:05:34.298370 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-19 02:05:34.298382 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-19 02:05:34.298394 | orchestrator |
2026-03-19 02:05:34.298406 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-19 02:05:34.298418 | orchestrator | Thursday 19 March 2026 02:05:24 +0000 (0:00:02.031) 0:00:30.362 ********
2026-03-19 02:05:34.298431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-19 02:05:34.298443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-19 02:05:34.298454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-19 02:05:34.298465 | orchestrator |
2026-03-19 02:05:34.298475 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-19 02:05:34.298486 | orchestrator | Thursday 19 March 2026 02:05:26 +0000 (0:00:02.018) 0:00:32.381 ********
2026-03-19 02:05:34.298499 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-19 02:05:34.298511 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-19 02:05:34.298522 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-19 02:05:34.298533 | orchestrator |
2026-03-19 02:05:34.298556 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-19 02:05:34.298568 | orchestrator | Thursday 19 March 2026 02:05:28 +0000 (0:00:01.501) 0:00:33.883 ********
2026-03-19 02:05:34.298583 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-19 02:05:34.298596 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-19 02:05:34.298609 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-19 02:05:34.298622 | orchestrator |
2026-03-19 02:05:34.298659 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-19 02:05:34.298672 | orchestrator | Thursday 19 March 2026 02:05:29 +0000 (0:00:00.507) 0:00:35.302 ********
2026-03-19 02:05:34.298685 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:05:34.298698 | orchestrator |
2026-03-19 02:05:34.298710 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-19 02:05:34.298723 | orchestrator | Thursday 19 March 2026 02:05:30 +0000 (0:00:00.507) 0:00:35.809 ********
2026-03-19 02:05:34.298739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:34.298756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:34.298776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 02:05:34.298813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:34.298828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:34.298842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:34.298863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:34.298875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:34.298887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:34.298898 | orchestrator |
2026-03-19 02:05:34.298910 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-19 02:05:34.298921 | orchestrator | Thursday 19 March 2026 02:05:33 +0000 (0:00:03.533) 0:00:39.343 ********
2026-03-19 02:05:34.298947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.040000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.040202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.040249 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:05:35.040266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.040278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.040290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.040302 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:05:35.040313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.040367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.040381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.040400 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:05:35.040412 | orchestrator |
2026-03-19 02:05:35.040425 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-19 02:05:35.040438 | orchestrator | Thursday 19 March 2026 02:05:34 +0000 (0:00:00.544) 0:00:39.887 ********
2026-03-19 02:05:35.040451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.040463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.040474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.040488 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:05:35.040502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.040530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.838936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.839068 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:05:35.839160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.839176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.839185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.839193 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:05:35.839201 | orchestrator |
2026-03-19 02:05:35.839211 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-19 02:05:35.839221 | orchestrator | Thursday 19 March 2026 02:05:35 +0000 (0:00:00.738) 0:00:40.626 ********
2026-03-19 02:05:35.839229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.839238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.839265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.839281 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:05:35.839290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.839298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.839333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.839341 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:05:35.839349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 02:05:35.839373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:35.839385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:35.839404 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:05:37.157051 | orchestrator |
2026-03-19 02:05:37.157175 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-19 02:05:37.157186 | orchestrator | Thursday 19 March 2026 02:05:35 +0000 (0:00:00.793) 0:00:41.420 ********
2026-03-19 02:05:37.157199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 02:05:37.157211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 02:05:37.157220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 02:05:37.157227 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:05:37.157235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 02:05:37.157243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:37.157275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:37.157302 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:37.157324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 02:05:37.157332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:37.157339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:37.157346 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:37.157353 | orchestrator | 2026-03-19 02:05:37.157360 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-19 02:05:37.157367 | orchestrator | Thursday 19 March 2026 02:05:36 +0000 (0:00:00.565) 0:00:41.985 ******** 2026-03-19 02:05:37.157375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 02:05:37.157382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:37.157402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:37.157410 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:37.157424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 02:05:38.120683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:38.120814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:38.120837 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:38.120856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 02:05:38.120873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:38.120888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:38.120938 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:38.120987 | orchestrator | 2026-03-19 02:05:38.121006 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-19 02:05:38.121022 | orchestrator | Thursday 19 March 2026 02:05:37 +0000 (0:00:00.758) 0:00:42.743 ******** 2026-03-19 02:05:38.121058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-19 02:05:38.121126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:38.121146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:38.121161 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:38.121178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-19 02:05:38.121193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:38.121223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:38.121240 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:38.121265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-19 02:05:38.121293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:39.403547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:39.403663 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:39.403680 | orchestrator | 2026-03-19 02:05:39.403692 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-19 02:05:39.403704 | orchestrator | Thursday 19 March 2026 02:05:38 +0000 (0:00:00.959) 0:00:43.703 ******** 2026-03-19 02:05:39.403717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 02:05:39.403731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:39.403772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:39.403785 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:39.403797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 02:05:39.403824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:39.403855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:39.403868 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:39.403879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 02:05:39.403890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:39.403909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:39.403920 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:39.403932 | orchestrator | 2026-03-19 02:05:39.403943 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-19 02:05:39.403954 | orchestrator | Thursday 19 March 2026 02:05:38 +0000 (0:00:00.562) 0:00:44.265 ******** 2026-03-19 02:05:39.403965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 02:05:39.403976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:39.404003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:45.922087 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:45.922371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 02:05:45.922417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:45.922468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:45.922480 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:45.922493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 02:05:45.922521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 02:05:45.922535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 02:05:45.922548 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:45.922561 | orchestrator | 2026-03-19 02:05:45.922575 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-19 02:05:45.922589 | orchestrator | Thursday 19 March 2026 02:05:39 +0000 (0:00:00.723) 0:00:44.989 ******** 2026-03-19 02:05:45.922602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 02:05:45.922638 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 02:05:45.922650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 02:05:45.922661 | orchestrator | 2026-03-19 02:05:45.922672 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-19 02:05:45.922684 | orchestrator | Thursday 19 March 2026 02:05:41 +0000 (0:00:01.680) 0:00:46.670 ******** 2026-03-19 02:05:45.922696 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 02:05:45.922707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 02:05:45.922718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 02:05:45.922729 | orchestrator | 2026-03-19 02:05:45.922748 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-19 02:05:45.922766 | orchestrator | Thursday 19 March 2026 02:05:42 +0000 (0:00:01.677) 0:00:48.347 ******** 2026-03-19 02:05:45.922784 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:05:45.922802 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:05:45.922820 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:05:45.922837 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:05:45.922854 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:45.922874 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:05:45.922892 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:45.922910 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:05:45.922929 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:45.922947 | orchestrator | 2026-03-19 02:05:45.922967 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-19 02:05:45.922986 | orchestrator | Thursday 19 March 2026 02:05:43 +0000 (0:00:00.773) 0:00:49.121 ******** 2026-03-19 02:05:45.923005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:45.923026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:45.923055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 02:05:45.923088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:49.873750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:49.873889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 02:05:49.873919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:49.873942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:49.873961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 02:05:49.873983 | orchestrator | 2026-03-19 02:05:49.874139 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-19 02:05:49.874172 | orchestrator | Thursday 19 March 2026 02:05:45 +0000 (0:00:02.390) 0:00:51.511 ******** 2026-03-19 02:05:49.874192 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:05:49.874212 | orchestrator | 2026-03-19 02:05:49.874231 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-19 02:05:49.874254 | orchestrator | Thursday 19 March 2026 02:05:46 +0000 (0:00:00.744) 0:00:52.256 ******** 2026-03-19 02:05:49.874300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 02:05:49.874354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:49.874373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:49.874390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:49.874407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 02:05:49.874431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:49.874444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:49.874475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.476843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 02:05:50.476960 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:50.476970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.476991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.476997 | orchestrator | 2026-03-19 02:05:50.477004 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
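In the "Copying over aodh haproxy config" task above, only the aodh-api item reports `changed` on each node, while the evaluator, listener, and notifier items are `skipping` — consistent with the task acting only on service entries that define an `haproxy` block. A minimal sketch of that selection logic, using the dict shapes shown in the log (illustrative names, not kolla-ansible's actual implementation):

```python
# Hypothetical sketch: select only enabled services that carry an
# 'haproxy' mapping, mirroring the changed/skipping pattern in the log.
def haproxy_services(project_services):
    """Yield (name, haproxy_block) for enabled services that expose
    an haproxy configuration block."""
    for name, svc in project_services.items():
        if not svc.get("enabled"):
            continue
        haproxy = svc.get("haproxy")
        if haproxy:  # aodh-api defines this; the other aodh services do not
            yield name, haproxy

# Trimmed-down versions of the service entries visible in the task output.
aodh_services = {
    "aodh-api": {
        "enabled": True,
        "haproxy": {
            "aodh_api": {"enabled": "yes", "mode": "http",
                         "external": False, "port": "8042"},
            "aodh_api_external": {"enabled": "yes", "mode": "http",
                                  "external": True, "port": "8042"},
        },
    },
    "aodh-evaluator": {"enabled": True},  # no 'haproxy' key -> skipped
    "aodh-listener": {"enabled": True},   # skipped
    "aodh-notifier": {"enabled": True},   # skipped
}

selected = dict(haproxy_services(aodh_services))
```

Note that each api service carries both an internal (`external: False`) and an external (`external: True`, with `external_fqdn`) frontend entry, which is why the subsequent "single external frontend" task evaluates the same items again.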
2026-03-19 02:05:50.477010 | orchestrator | Thursday 19 March 2026 02:05:49 +0000 (0:00:03.197) 0:00:55.454 ******** 2026-03-19 02:05:50.477017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 02:05:50.477052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:50.477059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.477064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.477070 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:50.477076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 02:05:50.477085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:50.477112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.477118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:50.477124 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:50.477134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 02:05:58.651360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 02:05:58.651502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
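Every container entry in this output carries a kolla-style `healthcheck` dict with `interval`, `retries`, `start_period`, `test`, and `timeout`, where durations are strings denominated in seconds and `test` is a `['CMD-SHELL', '<command>']` pair. As a hedged sketch of what such a dict corresponds to, the following translates one into `docker run`-style healthcheck flags — the field names and values come from the log, but the translation itself is an assumption, not kolla-ansible's code path:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict (seconds as strings, as seen
    in the log above) into docker-run style flags. Illustrative only."""
    test = hc["test"]
    # ['CMD-SHELL', '<cmd>'] means: run <cmd> through a shell.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The aodh-api healthcheck from the testbed-node-0 item above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8042"],
      "timeout": "30"}
flags = healthcheck_flags(hc)
```

The `healthcheck_curl` and `healthcheck_port` commands referenced in `test` are helper scripts shipped inside the kolla images; the API containers are probed over HTTP on their bind address, while worker-style containers (evaluator, listener, notifier) are probed for an open connection to their backing service port.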
2026-03-19 02:05:58.651517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 02:05:58.651553 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:58.651565 | orchestrator | 2026-03-19 02:05:58.651576 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-19 02:05:58.651587 | orchestrator | Thursday 19 March 2026 02:05:50 +0000 (0:00:00.601) 0:00:56.056 ******** 2026-03-19 02:05:58.651598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651623 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:58.651649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651667 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:58.651676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-19 02:05:58.651694 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:05:58.651703 | orchestrator | 2026-03-19 02:05:58.651712 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-19 02:05:58.651721 | orchestrator | Thursday 19 March 2026 02:05:51 +0000 (0:00:01.047) 0:00:57.104 ******** 2026-03-19 02:05:58.651730 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:05:58.651739 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:05:58.651748 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:05:58.651756 | orchestrator | 2026-03-19 02:05:58.651766 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-19 02:05:58.651775 | orchestrator | Thursday 19 March 2026 02:05:52 +0000 (0:00:01.409) 0:00:58.513 ******** 2026-03-19 02:05:58.651784 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:05:58.651793 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:05:58.651802 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:05:58.651810 | orchestrator | 2026-03-19 02:05:58.651819 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-19 02:05:58.651828 | orchestrator | Thursday 19 March 2026 02:05:54 +0000 (0:00:01.929) 0:01:00.442 ******** 2026-03-19 02:05:58.651837 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:05:58.651846 | 
orchestrator | 2026-03-19 02:05:58.651876 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-19 02:05:58.651887 | orchestrator | Thursday 19 March 2026 02:05:55 +0000 (0:00:00.601) 0:01:01.043 ******** 2026-03-19 02:05:58.651901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 02:05:58.651926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:58.651938 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:05:58.651948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 02:05:58.651958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:58.651975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.233977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 02:05:59.234236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234284 | orchestrator | 2026-03-19 02:05:59.234298 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-19 02:05:59.234311 | orchestrator | Thursday 19 March 2026 02:05:58 +0000 (0:00:03.192) 0:01:04.236 ******** 2026-03-19 02:05:59.234325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 02:05:59.234337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234411 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:05:59.234431 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 02:05:59.234443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:05:59.234470 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:05:59.234483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 02:05:59.234514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 02:06:08.327806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:06:08.327934 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:08.327961 | orchestrator | 2026-03-19 02:06:08.327983 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-19 02:06:08.328004 | orchestrator | Thursday 19 March 2026 02:05:59 +0000 (0:00:00.585) 0:01:04.822 ******** 2026-03-19 02:06:08.328047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328095 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:08.328188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328232 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:08.328250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-19 02:06:08.328285 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:08.328304 | orchestrator | 2026-03-19 02:06:08.328323 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-19 02:06:08.328342 | orchestrator | Thursday 19 March 2026 02:06:00 +0000 (0:00:00.776) 0:01:05.598 ******** 2026-03-19 02:06:08.328361 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:08.328380 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:08.328399 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:06:08.328417 | orchestrator | 2026-03-19 02:06:08.328436 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-19 02:06:08.328456 | orchestrator | Thursday 19 March 2026 02:06:01 +0000 (0:00:01.488) 0:01:07.086 ******** 2026-03-19 02:06:08.328514 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:08.328536 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:08.328556 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:06:08.328574 | orchestrator | 2026-03-19 02:06:08.328592 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-19 02:06:08.328611 | orchestrator | 
Thursday 19 March 2026 02:06:03 +0000 (0:00:01.961) 0:01:09.048 ******** 2026-03-19 02:06:08.328629 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:08.328648 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:08.328667 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:08.328687 | orchestrator | 2026-03-19 02:06:08.328705 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-19 02:06:08.328722 | orchestrator | Thursday 19 March 2026 02:06:03 +0000 (0:00:00.286) 0:01:09.335 ******** 2026-03-19 02:06:08.328738 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:06:08.328754 | orchestrator | 2026-03-19 02:06:08.328773 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-19 02:06:08.328790 | orchestrator | Thursday 19 March 2026 02:06:04 +0000 (0:00:00.618) 0:01:09.953 ******** 2026-03-19 02:06:08.328845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 02:06:08.328881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 02:06:08.328901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 02:06:08.328919 | orchestrator | 2026-03-19 02:06:08.328936 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-19 02:06:08.328954 | orchestrator | Thursday 19 March 2026 02:06:06 +0000 (0:00:02.631) 0:01:12.584 ******** 2026-03-19 02:06:08.328986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 02:06:08.329006 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:08.329024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 02:06:08.329042 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:08.329075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 02:06:15.481319 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:15.481447 | orchestrator | 2026-03-19 02:06:15.481464 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-19 02:06:15.481478 | orchestrator | Thursday 19 March 2026 02:06:08 +0000 (0:00:01.328) 0:01:13.912 ******** 2026-03-19 02:06:15.481511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481542 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:15.481554 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481603 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:15.481615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 02:06:15.481637 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:15.481648 | orchestrator | 2026-03-19 02:06:15.481659 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-19 02:06:15.481670 | orchestrator | Thursday 19 March 2026 02:06:09 +0000 (0:00:01.557) 0:01:15.470 ******** 2026-03-19 02:06:15.481681 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:15.481692 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:15.481703 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:15.481713 | orchestrator | 2026-03-19 02:06:15.481728 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-19 02:06:15.481740 | orchestrator | Thursday 19 March 2026 02:06:10 +0000 (0:00:00.402) 0:01:15.873 ******** 2026-03-19 02:06:15.481751 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:15.481762 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:15.481773 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:15.481783 | orchestrator | 2026-03-19 02:06:15.481794 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-19 02:06:15.481805 | orchestrator | Thursday 19 March 2026 02:06:11 +0000 (0:00:01.226) 0:01:17.099 ******** 2026-03-19 02:06:15.481815 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:06:15.481826 | orchestrator | 2026-03-19 02:06:15.481837 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-19 02:06:15.481851 | orchestrator | Thursday 19 March 2026 02:06:12 +0000 (0:00:00.852) 0:01:17.952 ******** 2026-03-19 02:06:15.481891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 02:06:15.481918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:06:15.481933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 
02:06:15.481947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 02:06:15.481961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 02:06:15.481983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 02:06:16.101806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:06:16.101924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:06:16.101949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 02:06:16.101989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 02:06:16.102011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 02:06:16.102244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 02:06:16.102293 | orchestrator |
2026-03-19 02:06:16.102307 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-19 02:06:16.102322 | orchestrator | Thursday 19 March 2026 02:06:15 +0000 (0:00:03.195) 0:01:21.147 ********
2026-03-19 02:06:16.102338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-19 02:06:16.102354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 02:06:16.102368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-19 02:06:16.102380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 02:06:16.102393 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:16.102416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-19 02:06:16.102448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.261861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.261976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.261994 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:25.262009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-19 02:06:25.262081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.262177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.262255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.262269 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:25.262282 | orchestrator |
2026-03-19 02:06:25.262294 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-19 02:06:25.262307 | orchestrator | Thursday 19 March 2026 02:06:16 +0000 (0:00:00.648) 0:01:21.795 ********
2026-03-19 02:06:25.262319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262347 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:25.262361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262388 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:25.262401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-19 02:06:25.262427 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:25.262440 | orchestrator |
2026-03-19 02:06:25.262453 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-19 02:06:25.262466 | orchestrator | Thursday 19 March 2026 02:06:17 +0000 (0:00:01.064) 0:01:22.860 ********
2026-03-19 02:06:25.262480 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:06:25.262504 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:06:25.262515 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:06:25.262526 | orchestrator |
2026-03-19 02:06:25.262537 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-19 02:06:25.262548 | orchestrator | Thursday 19 March 2026 02:06:18 +0000 (0:00:01.308) 0:01:24.168 ********
2026-03-19 02:06:25.262559 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:06:25.262571 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:06:25.262582 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:06:25.262593 | orchestrator |
2026-03-19 02:06:25.262604 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-19 02:06:25.262615 | orchestrator | Thursday 19 March 2026 02:06:20 +0000 (0:00:01.955) 0:01:26.124 ********
2026-03-19 02:06:25.262626 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:25.262637 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:25.262648 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:25.262659 | orchestrator |
2026-03-19 02:06:25.262670 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-19 02:06:25.262681 | orchestrator | Thursday 19 March 2026 02:06:20 +0000 (0:00:00.289) 0:01:26.414 ********
2026-03-19 02:06:25.262692 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:25.262709 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:25.262727 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:25.262755 | orchestrator |
2026-03-19 02:06:25.262773 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-19 02:06:25.262792 | orchestrator | Thursday 19 March 2026 02:06:21 +0000 (0:00:00.285) 0:01:26.700 ********
2026-03-19 02:06:25.262810 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:06:25.262827 | orchestrator |
2026-03-19 02:06:25.262843 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-19 02:06:25.262867 | orchestrator | Thursday 19 March 2026 02:06:22 +0000 (0:00:00.931) 0:01:27.632 ********
2026-03-19 02:06:25.262902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:25.492239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:25.492332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:25.492458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:25.492491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 02:06:25.492528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:25.492548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:26.058662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058785 | orchestrator |
2026-03-19 02:06:26.058799 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-19 02:06:26.058811 | orchestrator | Thursday 19 March 2026 02:06:25 +0000 (0:00:03.449) 0:01:31.081 ********
2026-03-19 02:06:26.058823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:26.058836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:26.058848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.058890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.485869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:26.485983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486000 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:26.486078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:26.486700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:26.486839 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:26.486851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 02:06:26.486863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 02:06:26.486875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 02:06:26.486894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 02:06:26.486916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 02:06:36.072795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 02:06:36.072881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-19 02:06:36.072888 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:36.072894 | orchestrator |
2026-03-19 02:06:36.072899 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-19 02:06:36.072905 | orchestrator | Thursday 19 March 2026 02:06:26 +0000 (0:00:00.993) 0:01:32.075 ********
2026-03-19 02:06:36.072910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072921 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:36.072925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072933 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:36.072937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-19 02:06:36.072963 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:36.072967 | orchestrator |
2026-03-19 02:06:36.072971 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-19 02:06:36.072975 | orchestrator | Thursday 19 March 2026 02:06:27 +0000 (0:00:01.218) 0:01:33.293 ********
2026-03-19 02:06:36.072979 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:06:36.072983 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:06:36.072987 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:06:36.072991 | orchestrator |
2026-03-19 02:06:36.072994 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-19 02:06:36.072998 | orchestrator | Thursday 19 March 2026 02:06:28 +0000 (0:00:01.298) 0:01:34.591 ********
2026-03-19 02:06:36.073002 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:06:36.073006 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:06:36.073010 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:06:36.073014 | orchestrator |
2026-03-19 02:06:36.073017 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-19 02:06:36.073021 | orchestrator | Thursday 19 March 2026 02:06:31 +0000 (0:00:02.018) 0:01:36.610 ********
2026-03-19 02:06:36.073025 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:06:36.073029 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:06:36.073033 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:36.073037 | orchestrator |
2026-03-19 02:06:36.073040 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-19 02:06:36.073044 | orchestrator | Thursday 19 March 2026 02:06:31 +0000 (0:00:00.316) 0:01:36.926 ********
2026-03-19 02:06:36.073048 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:06:36.073052 | orchestrator |
2026-03-19 02:06:36.073056 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-19 02:06:36.073060 | orchestrator | Thursday 19 March 2026 02:06:32 +0000 (0:00:00.950) 0:01:37.877 ********
2026-03-19 02:06:36.073081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292',
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 02:06:36.073088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 02:06:36.073102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 02:06:38.835014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 02:06:38.835249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 02:06:38.835296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-19 02:06:38.835320 | orchestrator |
2026-03-19 02:06:38.835333 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-03-19 02:06:38.835346 | orchestrator | Thursday 19 March 2026 02:06:36 +0000 (0:00:03.896) 0:01:41.774 ********
2026-03-19 02:06:38.835366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 02:06:38.835388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 02:06:42.056321 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:42.056444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 02:06:42.056484 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 02:06:42.056522 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:42.056560 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 02:06:42.056597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-19 02:06:42.056638 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:06:42.056658 | orchestrator |
2026-03-19 02:06:42.056678 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-03-19 02:06:42.056695 | orchestrator | Thursday 19 March 2026 02:06:38 +0000 (0:00:02.743) 0:01:44.518 ********
2026-03-19
02:06:42.056714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:42.056748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:49.995785 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:49.995880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:49.995893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:49.995905 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:49.995915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:49.995942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 02:06:49.995952 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:49.995962 | orchestrator | 2026-03-19 02:06:49.995972 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-19 02:06:49.995982 | orchestrator | Thursday 19 March 2026 02:06:42 +0000 (0:00:03.129) 0:01:47.647 ******** 2026-03-19 02:06:49.996018 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:49.996027 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:49.996036 | orchestrator | changed: 
[testbed-node-2] 2026-03-19 02:06:49.996045 | orchestrator | 2026-03-19 02:06:49.996055 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-19 02:06:49.996062 | orchestrator | Thursday 19 March 2026 02:06:43 +0000 (0:00:01.279) 0:01:48.926 ******** 2026-03-19 02:06:49.996067 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:49.996072 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:49.996078 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:06:49.996083 | orchestrator | 2026-03-19 02:06:49.996089 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-19 02:06:49.996094 | orchestrator | Thursday 19 March 2026 02:06:45 +0000 (0:00:01.883) 0:01:50.809 ******** 2026-03-19 02:06:49.996099 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:49.996104 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:49.996110 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:49.996115 | orchestrator | 2026-03-19 02:06:49.996120 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-19 02:06:49.996126 | orchestrator | Thursday 19 March 2026 02:06:45 +0000 (0:00:00.269) 0:01:51.078 ******** 2026-03-19 02:06:49.996131 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:06:49.996137 | orchestrator | 2026-03-19 02:06:49.996142 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-19 02:06:49.996147 | orchestrator | Thursday 19 March 2026 02:06:46 +0000 (0:00:00.986) 0:01:52.064 ******** 2026-03-19 02:06:49.996191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 02:06:49.996201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 02:06:49.996207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 02:06:49.996213 | orchestrator | 2026-03-19 02:06:49.996218 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-19 02:06:49.996225 | orchestrator | Thursday 19 March 2026 02:06:49 +0000 (0:00:02.865) 0:01:54.930 ******** 2026-03-19 02:06:49.996236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 02:06:49.996243 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:49.996249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 02:06:49.996254 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:49.996260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 02:06:49.996322 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:49.996334 | orchestrator | 2026-03-19 02:06:49.996341 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-19 02:06:49.996347 | orchestrator | Thursday 19 March 2026 02:06:49 +0000 (0:00:00.396) 0:01:55.327 ******** 2026-03-19 02:06:49.996354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:49.996368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:58.560363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:58.560501 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:58.560527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:58.560547 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
02:06:58.560563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:58.560579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-19 02:06:58.560629 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:06:58.560644 | orchestrator | 2026-03-19 02:06:58.560660 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-19 02:06:58.560678 | orchestrator | Thursday 19 March 2026 02:06:50 +0000 (0:00:00.884) 0:01:56.212 ******** 2026-03-19 02:06:58.560694 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:58.560710 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:58.560726 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:06:58.560742 | orchestrator | 2026-03-19 02:06:58.560757 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-19 02:06:58.560773 | orchestrator | Thursday 19 March 2026 02:06:51 +0000 (0:00:01.338) 0:01:57.550 ******** 2026-03-19 02:06:58.560789 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:06:58.560799 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:06:58.560809 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:06:58.560818 | orchestrator | 2026-03-19 02:06:58.560828 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-19 02:06:58.560854 | orchestrator | Thursday 19 March 2026 02:06:54 +0000 (0:00:02.115) 0:01:59.666 ******** 2026-03-19 02:06:58.560864 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:06:58.560874 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:06:58.560883 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 02:06:58.560893 | orchestrator | 2026-03-19 02:06:58.560905 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-19 02:06:58.560916 | orchestrator | Thursday 19 March 2026 02:06:54 +0000 (0:00:00.283) 0:01:59.950 ******** 2026-03-19 02:06:58.560927 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:06:58.560938 | orchestrator | 2026-03-19 02:06:58.560950 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-19 02:06:58.560961 | orchestrator | Thursday 19 March 2026 02:06:55 +0000 (0:00:01.088) 0:02:01.038 ******** 2026-03-19 02:06:58.561002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 02:06:58.561036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 02:06:58.561058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 02:07:00.110628 | orchestrator | 2026-03-19 02:07:00.110780 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-19 02:07:00.110808 | orchestrator | Thursday 19 March 2026 02:06:58 +0000 (0:00:03.109) 0:02:04.148 ******** 2026-03-19 02:07:00.110859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 02:07:00.110886 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:00.110935 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 02:07:00.110998 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:00.111029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 02:07:00.111049 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:00.111067 | orchestrator | 2026-03-19 02:07:00.111084 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-19 02:07:00.111102 | orchestrator | Thursday 19 March 2026 02:06:59 +0000 (0:00:00.648) 0:02:04.797 ******** 2026-03-19 02:07:00.111121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:00.111193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:00.111222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:00.111260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:08.478935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 02:07:08.479032 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:08.479044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:08.479055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:08.479079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:08.479086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:08.479093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 02:07:08.479098 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:08.479104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:08.479110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:08.479116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-19 02:07:08.479141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 02:07:08.479147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 02:07:08.479153 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:08.479158 | orchestrator | 2026-03-19 02:07:08.479206 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-19 02:07:08.479214 | orchestrator | Thursday 19 March 2026 02:07:00 +0000 (0:00:00.903) 0:02:05.700 ******** 2026-03-19 02:07:08.479219 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:08.479224 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:08.479230 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:08.479235 | orchestrator | 2026-03-19 02:07:08.479241 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-19 02:07:08.479246 | orchestrator | Thursday 19 March 2026 02:07:01 +0000 (0:00:01.605) 0:02:07.306 ******** 2026-03-19 02:07:08.479252 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:08.479257 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:08.479263 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:08.479268 | orchestrator | 2026-03-19 02:07:08.479273 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-19 02:07:08.479279 | orchestrator | Thursday 19 March 2026 02:07:03 +0000 (0:00:02.016) 0:02:09.322 ******** 2026-03-19 02:07:08.479284 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:08.479290 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:08.479308 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:08.479313 | orchestrator | 2026-03-19 02:07:08.479322 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-19 02:07:08.479331 | orchestrator | Thursday 19 March 2026 02:07:04 +0000 (0:00:00.300) 0:02:09.622 ******** 2026-03-19 02:07:08.479339 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:08.479347 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:08.479356 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:08.479364 | orchestrator | 2026-03-19 02:07:08.479373 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-19 02:07:08.479382 | orchestrator | Thursday 19 March 2026 02:07:04 +0000 (0:00:00.277) 0:02:09.900 ******** 2026-03-19 02:07:08.479387 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:08.479392 | orchestrator | 2026-03-19 02:07:08.479398 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-19 02:07:08.479403 | orchestrator | Thursday 19 March 2026 02:07:05 +0000 (0:00:01.092) 0:02:10.993 ******** 2026-03-19 02:07:08.479417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:07:08.479433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:08.479440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:08.479447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:07:08.479458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:09.046405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:09.046495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:07:09.046528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:09.046537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:09.046545 | orchestrator | 2026-03-19 02:07:09.046553 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-19 02:07:09.046562 | orchestrator | Thursday 19 March 2026 02:07:08 +0000 (0:00:03.077) 0:02:14.070 ******** 2026-03-19 02:07:09.046584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:07:09.046596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:09.046604 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:09.046619 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:09.046628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:07:09.046636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:09.046643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:09.046650 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:09.046666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:07:18.091472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:07:18.091607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:07:18.091635 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:18.091647 | orchestrator | 2026-03-19 02:07:18.091658 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-19 02:07:18.091669 | orchestrator | Thursday 19 March 2026 02:07:09 +0000 (0:00:00.555) 0:02:14.625 ******** 2026-03-19 02:07:18.091680 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091702 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:18.091712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091731 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:18.091740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-19 02:07:18.091758 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:18.091802 | 
orchestrator | 2026-03-19 02:07:18.091819 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-19 02:07:18.091834 | orchestrator | Thursday 19 March 2026 02:07:10 +0000 (0:00:00.991) 0:02:15.617 ******** 2026-03-19 02:07:18.091848 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:18.091863 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:18.091912 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:18.091929 | orchestrator | 2026-03-19 02:07:18.091943 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-19 02:07:18.091958 | orchestrator | Thursday 19 March 2026 02:07:11 +0000 (0:00:01.333) 0:02:16.951 ******** 2026-03-19 02:07:18.091972 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:18.091988 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:18.092002 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:18.092016 | orchestrator | 2026-03-19 02:07:18.092032 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-19 02:07:18.092049 | orchestrator | Thursday 19 March 2026 02:07:13 +0000 (0:00:01.978) 0:02:18.930 ******** 2026-03-19 02:07:18.092065 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:18.092099 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:18.092115 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:18.092130 | orchestrator | 2026-03-19 02:07:18.092145 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-19 02:07:18.092218 | orchestrator | Thursday 19 March 2026 02:07:13 +0000 (0:00:00.283) 0:02:19.214 ******** 2026-03-19 02:07:18.092237 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:18.092254 | orchestrator | 2026-03-19 02:07:18.092269 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-03-19 02:07:18.092286 | orchestrator | Thursday 19 March 2026 02:07:14 +0000 (0:00:01.155) 0:02:20.369 ******** 2026-03-19 02:07:18.092298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 02:07:18.092312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:18.092322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 02:07:18.092342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:18.092362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 02:07:23.291028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:23.291121 | orchestrator | 2026-03-19 02:07:23.291132 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-19 02:07:23.291140 | orchestrator | Thursday 19 March 2026 02:07:18 +0000 (0:00:03.305) 0:02:23.674 ******** 2026-03-19 02:07:23.291149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 02:07:23.291243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:23.291275 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:23.291289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 02:07:23.291321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:23.291336 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:23.291347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 02:07:23.291357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:07:23.291376 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:23.291386 | orchestrator | 2026-03-19 02:07:23.291397 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-19 02:07:23.291407 | orchestrator | Thursday 19 March 2026 02:07:18 +0000 (0:00:00.639) 0:02:24.314 ******** 2026-03-19 02:07:23.291418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291442 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 02:07:23.291452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291473 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:23.291483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-19 02:07:23.291504 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:23.291511 | orchestrator | 2026-03-19 02:07:23.291522 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-19 02:07:23.291529 | orchestrator | Thursday 19 March 2026 02:07:19 +0000 (0:00:00.905) 0:02:25.219 ******** 2026-03-19 02:07:23.291535 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:23.291541 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:23.291547 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:23.291553 | orchestrator | 2026-03-19 02:07:23.291560 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-19 02:07:23.291566 | orchestrator | Thursday 19 March 2026 02:07:21 +0000 (0:00:01.644) 0:02:26.863 ******** 2026-03-19 02:07:23.291572 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:23.291579 | orchestrator | changed: 
[testbed-node-1] 2026-03-19 02:07:23.291587 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:23.291594 | orchestrator | 2026-03-19 02:07:23.291601 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-19 02:07:23.291615 | orchestrator | Thursday 19 March 2026 02:07:23 +0000 (0:00:02.012) 0:02:28.875 ******** 2026-03-19 02:07:27.569435 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:27.570346 | orchestrator | 2026-03-19 02:07:27.570375 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-19 02:07:27.570383 | orchestrator | Thursday 19 March 2026 02:07:24 +0000 (0:00:00.983) 0:02:29.859 ******** 2026-03-19 02:07:27.570393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 02:07:27.570426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 02:07:27.570462 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570506 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 02:07:27.570519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:27.570542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519408 | orchestrator | 2026-03-19 02:07:28.519577 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-19 02:07:28.519611 | orchestrator | Thursday 19 March 2026 02:07:27 +0000 (0:00:03.378) 0:02:33.237 ******** 2026-03-19 02:07:28.519677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 02:07:28.519707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519775 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:28.519822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 02:07:28.519877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.519958 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:28.519975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 02:07:28.519993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.520024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 02:07:28.520061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 02:07:39.402243 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:39.402424 | orchestrator | 2026-03-19 02:07:39.402445 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-19 02:07:39.402459 | orchestrator | Thursday 19 March 2026 02:07:28 +0000 (0:00:00.948) 0:02:34.185 ******** 2026-03-19 02:07:39.402472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402501 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:39.402514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402537 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:39.402548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-19 02:07:39.402571 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:39.402582 | orchestrator | 2026-03-19 02:07:39.402593 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-19 02:07:39.402604 | orchestrator | Thursday 19 March 2026 02:07:29 +0000 (0:00:00.861) 0:02:35.047 ******** 2026-03-19 02:07:39.402615 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:39.402626 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:39.402637 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:39.402648 | orchestrator | 2026-03-19 02:07:39.402658 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-19 02:07:39.402669 | orchestrator | Thursday 19 March 2026 02:07:30 +0000 (0:00:01.364) 0:02:36.411 ******** 2026-03-19 02:07:39.402680 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:39.402691 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:39.402702 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:39.402713 | orchestrator | 2026-03-19 02:07:39.402724 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-19 02:07:39.402735 | orchestrator | Thursday 19 March 2026 02:07:32 +0000 (0:00:02.018) 0:02:38.430 ******** 2026-03-19 02:07:39.402746 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:39.402757 | orchestrator | 2026-03-19 02:07:39.402767 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-19 02:07:39.402778 | orchestrator | Thursday 19 March 2026 02:07:34 +0000 (0:00:01.295) 0:02:39.725 ******** 2026-03-19 02:07:39.402790 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 02:07:39.402801 | orchestrator | 2026-03-19 02:07:39.402812 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-19 02:07:39.402851 | orchestrator | Thursday 19 March 2026 02:07:37 +0000 (0:00:03.099) 0:02:42.825 ******** 2026-03-19 02:07:39.402909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:39.402928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 02:07:39.402941 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:39.402959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:39.402982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 02:07:39.402993 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:39.403016 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:41.685632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 02:07:41.685755 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:41.685779 | orchestrator | 2026-03-19 02:07:41.685793 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-19 02:07:41.685805 | orchestrator | Thursday 19 March 2026 02:07:39 +0000 (0:00:02.153) 0:02:44.979 ******** 2026-03-19 02:07:41.685871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:41.685886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 02:07:41.685898 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:41.685932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:41.685962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-19 02:07:41.685974 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:41.685986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:07:41.686006 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 02:07:50.972572 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:50.972695 | orchestrator | 2026-03-19 02:07:50.972709 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-19 02:07:50.972717 | orchestrator | Thursday 19 March 2026 02:07:41 +0000 (0:00:02.292) 0:02:47.271 ******** 2026-03-19 02:07:50.972726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972785 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:50.972791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972804 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:50.972810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 02:07:50.972823 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:50.972829 | orchestrator | 2026-03-19 02:07:50.972835 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-19 02:07:50.972841 | orchestrator | Thursday 19 March 2026 02:07:44 +0000 (0:00:02.615) 0:02:49.887 ******** 2026-03-19 02:07:50.972847 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:07:50.972876 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:07:50.972883 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:07:50.972888 | orchestrator | 2026-03-19 02:07:50.972894 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-19 02:07:50.972900 | orchestrator | Thursday 19 March 2026 02:07:46 +0000 (0:00:02.008) 0:02:51.895 ******** 2026-03-19 02:07:50.972906 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:50.972912 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:50.972917 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:50.972923 | orchestrator | 2026-03-19 02:07:50.972929 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-19 02:07:50.972935 | orchestrator | Thursday 19 March 2026 02:07:47 +0000 (0:00:01.353) 0:02:53.249 ******** 2026-03-19 02:07:50.972941 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:50.972947 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:50.972952 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:50.972958 | orchestrator | 2026-03-19 02:07:50.972965 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-19 02:07:50.972971 | orchestrator | Thursday 19 March 2026 02:07:47 +0000 (0:00:00.294) 0:02:53.543 ******** 2026-03-19 02:07:50.972976 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:50.972982 | orchestrator | 2026-03-19 02:07:50.972988 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-19 02:07:50.972994 | orchestrator | Thursday 19 March 2026 02:07:49 +0000 (0:00:01.278) 0:02:54.822 ******** 2026-03-19 02:07:50.973006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 02:07:50.973016 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 02:07:50.973022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 02:07:50.973029 | orchestrator | 2026-03-19 02:07:50.973035 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-19 02:07:50.973047 | orchestrator | Thursday 19 March 2026 02:07:50 +0000 (0:00:01.555) 0:02:56.377 ******** 2026-03-19 02:07:50.973058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 02:07:58.800308 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:58.800434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 02:07:58.800454 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:58.800467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 02:07:58.800474 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:58.800481 | orchestrator | 2026-03-19 02:07:58.800489 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-19 02:07:58.800496 | orchestrator | Thursday 19 March 2026 02:07:51 +0000 (0:00:00.378) 0:02:56.756 ******** 2026-03-19 02:07:58.800505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 02:07:58.800513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 02:07:58.800520 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:58.800526 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:58.800533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 02:07:58.800563 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:58.800570 | orchestrator | 2026-03-19 02:07:58.800615 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-19 02:07:58.800622 | orchestrator | Thursday 19 March 2026 02:07:51 +0000 (0:00:00.802) 0:02:57.559 ******** 2026-03-19 02:07:58.800628 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:58.800635 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:58.800641 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:58.800647 | orchestrator | 2026-03-19 02:07:58.800653 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-19 02:07:58.800659 | orchestrator | Thursday 19 March 2026 02:07:52 +0000 (0:00:00.424) 0:02:57.983 ******** 2026-03-19 02:07:58.800665 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:58.800672 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:58.800678 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:58.800684 | orchestrator | 2026-03-19 02:07:58.800690 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-19 02:07:58.800696 | orchestrator | Thursday 19 March 2026 02:07:53 +0000 (0:00:01.237) 0:02:59.221 ******** 2026-03-19 02:07:58.800702 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:07:58.800709 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:07:58.800715 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:07:58.800721 | orchestrator | 2026-03-19 02:07:58.800727 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-19 02:07:58.800733 | orchestrator | Thursday 19 March 2026 02:07:53 +0000 (0:00:00.291) 0:02:59.512 ******** 2026-03-19 02:07:58.800739 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:07:58.800746 | orchestrator | 2026-03-19 02:07:58.800752 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-03-19 02:07:58.800758 | orchestrator | Thursday 19 March 2026 02:07:55 +0000 (0:00:01.362) 0:03:00.875 ******** 2026-03-19 02:07:58.800781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:07:58.800793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:58.800803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:58.800818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:58.800827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 02:07:58.800842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:07:59.025696 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.025757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.025779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:07:59.025888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.025931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 02:07:59.025946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-19 02:07:59.025972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.136350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.136468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.136504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2026-03-19 02:07:59.136517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:07:59.136527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.136537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.136564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.136579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:07:59.136595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:07:59.136604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.136613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.136627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.399310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-19 02:07:59.399424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.399435 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.399441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 02:07:59.399448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.399454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.399484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 02:07:59.399497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.399504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:07:59.399509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:07:59.399514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:07:59.399521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:07:59.399535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.429368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-19 02:08:00.429466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-19 02:08:00.429479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.429492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 02:08:00.429505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:08:00.429515 | orchestrator | 2026-03-19 02:08:00.429526 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-19 02:08:00.429562 | orchestrator | Thursday 19 March 2026 02:07:59 +0000 (0:00:04.113) 0:03:04.989 ******** 2026-03-19 02:08:00.429603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:08:00.429615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.429627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.429644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.429660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-19 02:08:00.429697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 02:08:00.528428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:08:00.528520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-03-19 02:08:00.528532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.528549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:00.528634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-19 02:08:00.528642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.528670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 02:08:00.633855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.633947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.633960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.633970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.633979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 02:08:00.634010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.634142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:00.634154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:00.634162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.634171 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:08:00.634221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-19 02:08:00.634239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 02:08:00.634253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.871999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.872150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.872233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.872340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 02:08:00.872407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.872470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:00.872494 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:08:00.872516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-19 02:08:00.872534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.872554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.872588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:00.872608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:00.872650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:10.608877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:10.608997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-19 02:08:10.609015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-19 02:08:10.609029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-19 02:08:10.609072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-19 02:08:10.609105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-19 02:08:10.609118 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:08:10.609132 | orchestrator |
2026-03-19 02:08:10.609161 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-19 02:08:10.609174 | orchestrator | Thursday 19 March 2026 02:08:00 +0000 (0:00:01.472) 0:03:06.462 ********
2026-03-19 02:08:10.609242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609272 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:08:10.609283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609304 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:08:10.609315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-19 02:08:10.609405 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:08:10.609420 | orchestrator |
2026-03-19 02:08:10.609433 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-19 02:08:10.609446 | orchestrator | Thursday 19 March 2026 02:08:02 +0000 (0:00:01.923) 0:03:08.385 ********
2026-03-19 02:08:10.609460 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:08:10.609473 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:08:10.609486 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:08:10.609498 | orchestrator |
2026-03-19 02:08:10.609511 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-19 02:08:10.609524 | orchestrator | Thursday 19 March 2026 02:08:04 +0000 (0:00:01.308) 0:03:09.693 ********
2026-03-19 02:08:10.609537 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:08:10.609550 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:08:10.609563 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:08:10.609575 | orchestrator |
2026-03-19 02:08:10.609588 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-19 02:08:10.609600 | orchestrator | Thursday 19 March 2026 02:08:06 +0000 (0:00:02.041) 0:03:11.735 ********
2026-03-19 02:08:10.609611 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:08:10.609621 | orchestrator |
2026-03-19 02:08:10.609632 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-19 02:08:10.609643 | orchestrator | Thursday 19 March 2026 02:08:07 +0000 (0:00:01.162) 0:03:12.898 ********
2026-03-19 02:08:10.609656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:10.609685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:21.426723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:21.426832 | orchestrator |
2026-03-19 02:08:21.426844 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-19 02:08:21.426852 | orchestrator | Thursday 19 March 2026 02:08:10 +0000 (0:00:03.298) 0:03:16.196 ********
2026-03-19 02:08:21.426860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:21.426867 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:08:21.426876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:21.426883 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:08:21.426896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-19 02:08:21.426903 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:08:21.426910 | orchestrator |
2026-03-19 02:08:21.426917 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-19 02:08:21.426936 | orchestrator | Thursday 19 March 2026 02:08:11 +0000 (0:00:00.517) 0:03:16.714 ********
2026-03-19 02:08:21.426945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.426958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.426967 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:08:21.426974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.426981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.426988 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:08:21.426995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.427002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-19 02:08:21.427008 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:08:21.427015 | orchestrator |
2026-03-19 02:08:21.427022 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-19 02:08:21.427028 | orchestrator | Thursday 19 March 2026 02:08:11 +0000 (0:00:00.730) 0:03:17.444 ********
2026-03-19 02:08:21.427035 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:08:21.427042 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:08:21.427048 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:08:21.427055 | orchestrator |
2026-03-19 02:08:21.427061 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-19 02:08:21.427068 | orchestrator | Thursday 19 March 2026 02:08:13 +0000 (0:00:01.853) 0:03:19.298 ********
2026-03-19 02:08:21.427075 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:08:21.427081 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:08:21.427088 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:08:21.427095 | orchestrator |
2026-03-19 02:08:21.427101 | orchestrator | TASK [include_role : nova]
***************************************************** 2026-03-19 02:08:21.427108 | orchestrator | Thursday 19 March 2026 02:08:15 +0000 (0:00:01.803) 0:03:21.102 ******** 2026-03-19 02:08:21.427115 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:08:21.427121 | orchestrator | 2026-03-19 02:08:21.427128 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-19 02:08:21.427135 | orchestrator | Thursday 19 March 2026 02:08:17 +0000 (0:00:01.559) 0:03:22.662 ******** 2026-03-19 02:08:21.427144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:08:21.427165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:08:22.337711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:08:22.337809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337826 | orchestrator | 2026-03-19 02:08:22.337835 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-19 02:08:22.337843 | orchestrator | Thursday 19 March 2026 02:08:21 +0000 (0:00:04.349) 0:03:27.011 ******** 2026-03-19 02:08:22.337852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:08:22.337866 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:22.337887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:25.062108 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:08:25.062321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:08:25.062348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:25.062367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:25.062385 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:08:25.062427 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:08:25.062507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:08:25.062529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:08:25.062546 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:08:25.062563 | orchestrator | 2026-03-19 02:08:25.062581 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-19 02:08:25.062600 | orchestrator | Thursday 19 March 2026 02:08:22 +0000 (0:00:00.914) 0:03:27.926 ******** 2026-03-19 02:08:25.062619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062696 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
02:08:25.062711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062790 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:08:25.062806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-19 02:08:25.062881 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:08:25.062898 | orchestrator | 2026-03-19 02:08:25.062917 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-19 02:08:25.062933 | orchestrator | Thursday 19 March 2026 02:08:23 +0000 (0:00:01.260) 0:03:29.187 ******** 2026-03-19 02:08:25.062950 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:08:25.062979 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:08:43.520062 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:08:43.520168 | orchestrator | 2026-03-19 02:08:43.520180 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-19 02:08:43.520190 | orchestrator | Thursday 19 March 2026 02:08:25 +0000 (0:00:01.463) 0:03:30.650 ******** 2026-03-19 02:08:43.520197 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:08:43.520205 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:08:43.520212 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:08:43.520268 | orchestrator | 2026-03-19 02:08:43.520275 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-19 02:08:43.520282 | orchestrator | Thursday 19 March 2026 02:08:27 +0000 (0:00:02.211) 0:03:32.861 ******** 2026-03-19 02:08:43.520289 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:08:43.520296 | orchestrator | 2026-03-19 02:08:43.520303 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-19 02:08:43.520310 | orchestrator | Thursday 19 March 2026 02:08:28 +0000 (0:00:01.514) 0:03:34.376 ******** 2026-03-19 02:08:43.520318 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-03-19 02:08:43.520326 | orchestrator | 2026-03-19 02:08:43.520333 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-19 02:08:43.520340 | orchestrator | Thursday 19 March 2026 02:08:29 +0000 (0:00:00.802) 0:03:35.179 ******** 2026-03-19 02:08:43.520351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 02:08:43.520391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 02:08:43.520399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-19 02:08:43.520406 | orchestrator | 
2026-03-19 02:08:43.520414 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-19 02:08:43.520422 | orchestrator | Thursday 19 March 2026 02:08:33 +0000 (0:00:03.901) 0:03:39.080 ******** 2026-03-19 02:08:43.520429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:08:43.520436 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:08:43.520459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:08:43.520467 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:08:43.520492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:08:43.520499 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:08:43.520506 | orchestrator | 2026-03-19 02:08:43.520513 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-19 02:08:43.520519 | orchestrator | Thursday 19 March 2026 02:08:34 +0000 (0:00:01.376) 0:03:40.457 ******** 2026-03-19 02:08:43.520527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520550 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:08:43.520557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520571 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:08:43.520578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520585 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-19 02:08:43.520592 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:08:43.520599 | orchestrator | 2026-03-19 02:08:43.520606 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-19 02:08:43.520612 | orchestrator | Thursday 19 March 2026 02:08:36 +0000 (0:00:01.420) 0:03:41.877 ******** 2026-03-19 02:08:43.520618 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:08:43.520625 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:08:43.520632 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:08:43.520639 | orchestrator | 2026-03-19 02:08:43.520646 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 02:08:43.520653 | orchestrator | Thursday 19 March 2026 02:08:38 +0000 (0:00:02.364) 0:03:44.242 ******** 2026-03-19 02:08:43.520660 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:08:43.520668 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:08:43.520675 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:08:43.520682 | orchestrator | 2026-03-19 02:08:43.520690 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-19 02:08:43.520697 | orchestrator | Thursday 19 March 2026 02:08:41 +0000 (0:00:02.836) 0:03:47.078 ******** 2026-03-19 02:08:43.520705 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-19 02:08:43.520713 | orchestrator | 2026-03-19 02:08:43.520720 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-19 02:08:43.520728 | orchestrator | 
Thursday 19 March 2026 02:08:42 +0000 (0:00:01.061) 0:03:48.140 ******** 2026-03-19 02:08:43.520740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:08:43.520748 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:08:43.520763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:09:02.209411 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.209527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:09:02.209546 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
02:09:02.209556 | orchestrator | 2026-03-19 02:09:02.209567 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-19 02:09:02.209576 | orchestrator | Thursday 19 March 2026 02:08:43 +0000 (0:00:00.962) 0:03:49.102 ******** 2026-03-19 02:09:02.209586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:09:02.209596 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:02.209606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:09:02.209615 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.209624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-19 02:09:02.209634 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:02.209643 | orchestrator | 2026-03-19 02:09:02.209652 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-19 02:09:02.209661 | orchestrator | Thursday 19 March 2026 02:08:44 +0000 (0:00:01.183) 0:03:50.286 ******** 2026-03-19 02:09:02.209670 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:02.209679 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.209688 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:02.209697 | orchestrator | 2026-03-19 02:09:02.209706 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-19 02:09:02.209715 | orchestrator | Thursday 19 March 2026 02:08:46 +0000 (0:00:01.452) 0:03:51.738 ******** 2026-03-19 02:09:02.209724 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:09:02.209734 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:09:02.209742 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:09:02.209751 | orchestrator | 2026-03-19 02:09:02.209760 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 02:09:02.209769 | orchestrator | Thursday 19 March 2026 02:08:48 +0000 (0:00:02.614) 0:03:54.353 ******** 2026-03-19 02:09:02.209806 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:09:02.209815 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:09:02.209824 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:09:02.209832 | orchestrator | 2026-03-19 02:09:02.209857 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-19 02:09:02.209866 | orchestrator | Thursday 19 March 2026 02:08:51 +0000 (0:00:02.760) 0:03:57.114 ******** 2026-03-19 02:09:02.209875 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-19 02:09:02.209888 | orchestrator | 2026-03-19 02:09:02.209898 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-19 02:09:02.209909 | orchestrator | Thursday 19 March 2026 02:08:52 +0000 (0:00:01.122) 0:03:58.236 ******** 2026-03-19 02:09:02.209935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.209946 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:02.209957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.209968 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.209978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.209989 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:02.209999 | orchestrator | 2026-03-19 02:09:02.210010 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-19 02:09:02.210079 | orchestrator | Thursday 19 March 2026 02:08:53 +0000 (0:00:01.197) 0:03:59.434 ******** 2026-03-19 02:09:02.210091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.210102 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:02.210113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.210131 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.210140 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-19 02:09:02.210149 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:02.210158 | orchestrator | 2026-03-19 02:09:02.210182 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-19 02:09:02.210191 | orchestrator | Thursday 19 March 2026 02:08:55 +0000 (0:00:01.224) 0:04:00.659 ******** 2026-03-19 02:09:02.210200 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:02.210208 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:02.210217 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:02.210226 | orchestrator | 2026-03-19 02:09:02.210253 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-19 02:09:02.210262 | orchestrator | Thursday 19 March 2026 02:08:56 +0000 (0:00:01.695) 0:04:02.354 ******** 2026-03-19 02:09:02.210271 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:09:02.210280 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:09:02.210288 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:09:02.210297 | orchestrator | 2026-03-19 02:09:02.210306 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-19 02:09:02.210314 | orchestrator | Thursday 19 March 2026 02:08:59 +0000 (0:00:02.283) 0:04:04.638 ******** 2026-03-19 02:09:02.210323 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:09:02.210332 | orchestrator | ok: 
[testbed-node-1] 2026-03-19 02:09:02.210347 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:09:06.975525 | orchestrator | 2026-03-19 02:09:06.975650 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-19 02:09:06.975665 | orchestrator | Thursday 19 March 2026 02:09:02 +0000 (0:00:03.156) 0:04:07.794 ******** 2026-03-19 02:09:06.975676 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:09:06.975687 | orchestrator | 2026-03-19 02:09:06.975697 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-19 02:09:06.975707 | orchestrator | Thursday 19 March 2026 02:09:03 +0000 (0:00:01.282) 0:04:09.077 ******** 2026-03-19 02:09:06.975721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:06.975735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:06.975773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:06.975787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:06.975815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:06.975844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:06.975856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:06.975867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:06.975884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:06.975895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:06.975905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:06.975922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:07.661050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:07.661427 | orchestrator | 2026-03-19 02:09:07.661513 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-19 02:09:07.661538 | orchestrator | Thursday 19 March 2026 02:09:07 +0000 (0:00:03.616) 0:04:12.693 ******** 2026-03-19 02:09:07.661563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 02:09:07.661593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:07.661645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:07.661726 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:07.661746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 02:09:07.661768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:07.661794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:07.661843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:19.186326 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:19.186421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 02:09:19.186431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 02:09:19.186437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 02:09:19.186457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 02:09:19.186464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 02:09:19.186468 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:19.186473 | orchestrator | 2026-03-19 02:09:19.186478 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-19 02:09:19.186484 | orchestrator | Thursday 19 March 2026 02:09:07 +0000 (0:00:00.686) 0:04:13.380 ******** 2026-03-19 02:09:19.186490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186521 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:19.186536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186544 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:19.186548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 02:09:19.186557 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:19.186561 | orchestrator | 2026-03-19 02:09:19.186565 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-19 02:09:19.186569 | orchestrator | Thursday 19 March 2026 02:09:08 +0000 (0:00:00.868) 0:04:14.248 ******** 2026-03-19 02:09:19.186573 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:09:19.186577 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:09:19.186581 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:09:19.186584 | orchestrator | 2026-03-19 02:09:19.186588 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-19 02:09:19.186592 | orchestrator | Thursday 19 March 2026 02:09:10 +0000 (0:00:01.747) 0:04:15.995 ******** 2026-03-19 02:09:19.186596 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:09:19.186600 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:09:19.186604 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:09:19.186608 | orchestrator | 2026-03-19 02:09:19.186612 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-19 02:09:19.186616 | orchestrator | Thursday 19 March 2026 02:09:12 +0000 (0:00:02.077) 0:04:18.073 ******** 2026-03-19 02:09:19.186620 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:09:19.186625 | orchestrator | 2026-03-19 02:09:19.186628 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-03-19 02:09:19.186632 | orchestrator | Thursday 19 March 2026 02:09:13 +0000 (0:00:01.350) 0:04:19.424 ******** 2026-03-19 02:09:19.186642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:09:19.186649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:09:19.186663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:09:20.304438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:09:20.304551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:09:20.304561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:09:20.304587 | orchestrator | 2026-03-19 02:09:20.304595 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-19 02:09:20.304601 | orchestrator | Thursday 19 March 2026 02:09:19 +0000 (0:00:05.349) 0:04:24.773 ******** 2026-03-19 02:09:20.304621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:09:20.304629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:09:20.304635 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:20.304646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:09:20.304652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:09:20.304663 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:20.304669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:09:20.304679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:09:27.233327 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:27.233459 | orchestrator | 2026-03-19 02:09:27.233477 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-19 02:09:27.233492 | orchestrator | Thursday 19 March 2026 02:09:20 +0000 (0:00:01.115) 0:04:25.888 ******** 2026-03-19 02:09:27.233504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-19 02:09:27.233519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233576 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:27.233636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-03-19 02:09:27.233649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233672 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:27.233683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-19 02:09:27.233694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-19 02:09:27.233716 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:27.233727 | orchestrator | 2026-03-19 02:09:27.233738 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-19 02:09:27.233749 | orchestrator | Thursday 19 March 2026 02:09:21 +0000 (0:00:00.858) 0:04:26.747 ******** 2026-03-19 02:09:27.233760 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:27.233771 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:27.233782 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 02:09:27.233793 | orchestrator | 2026-03-19 02:09:27.233803 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-19 02:09:27.233815 | orchestrator | Thursday 19 March 2026 02:09:21 +0000 (0:00:00.405) 0:04:27.153 ******** 2026-03-19 02:09:27.233828 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:27.233840 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:27.233852 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:27.233865 | orchestrator | 2026-03-19 02:09:27.233878 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-19 02:09:27.233891 | orchestrator | Thursday 19 March 2026 02:09:23 +0000 (0:00:01.615) 0:04:28.768 ******** 2026-03-19 02:09:27.233903 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:09:27.233916 | orchestrator | 2026-03-19 02:09:27.233929 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-19 02:09:27.233942 | orchestrator | Thursday 19 March 2026 02:09:24 +0000 (0:00:01.654) 0:04:30.423 ******** 2026-03-19 02:09:27.233989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 02:09:27.234093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 02:09:27.234129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 02:09:27.234153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
 2026-03-19 02:09:27.234173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:27.234193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:27.234212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:27.234232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:27.234321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:28.857114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:28.857301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 02:09:28.857335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 02:09:28.857356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:28.857372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:28.857384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:28.857442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 02:09:28.857466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:28.857478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 02:09:28.857491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:28.857502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:28.857530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.550695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.550726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 02:09:29.550786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:29.550833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.550888 | orchestrator | 2026-03-19 02:09:29.550902 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-19 02:09:29.550914 | orchestrator | Thursday 19 March 2026 02:09:28 +0000 (0:00:04.168) 0:04:34.592 ******** 2026-03-19 02:09:29.550926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 02:09:29.550939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 02:09:29.550962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.550993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.704949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 02:09:29.705060 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:29.705076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.705115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 02:09:29.705129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.705165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 02:09:29.705178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.705191 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:29.705204 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.705216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:29.705227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:29.705293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 02:09:29.705315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:31.459882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 02:09:31.460105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:31.460118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 02:09:31.460131 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:31.460145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 02:09:31.460213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 02:09:31.460227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-19 02:09:31.460294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 02:09:31.460319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 02:09:31.460330 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:31.460342 | orchestrator | 2026-03-19 02:09:31.460354 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-19 02:09:31.460369 | orchestrator | Thursday 19 March 2026 02:09:29 +0000 (0:00:00.849) 0:04:35.441 ******** 2026-03-19 02:09:31.460394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.290795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.290815 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:37.290836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.290929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.290947 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:37.290964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-19 02:09:37.290999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.291016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-19 02:09:37.291034 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:37.291051 | orchestrator | 2026-03-19 02:09:37.291070 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-19 02:09:37.291088 | orchestrator | Thursday 19 March 2026 02:09:31 +0000 (0:00:01.605) 0:04:37.047 ******** 2026-03-19 02:09:37.291105 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:37.291122 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:37.291139 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:37.291156 | orchestrator | 2026-03-19 02:09:37.291174 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-19 02:09:37.291190 | orchestrator | Thursday 19 March 2026 02:09:31 +0000 (0:00:00.486) 0:04:37.534 ******** 2026-03-19 02:09:37.291208 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:37.291225 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:37.291242 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:37.291373 | orchestrator | 2026-03-19 02:09:37.291393 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-19 02:09:37.291411 | orchestrator | Thursday 19 March 2026 02:09:33 +0000 (0:00:01.301) 0:04:38.835 ******** 2026-03-19 02:09:37.291430 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:09:37.291447 | orchestrator | 2026-03-19 02:09:37.291466 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-19 02:09:37.291484 | orchestrator | Thursday 19 March 2026 02:09:34 +0000 (0:00:01.687) 0:04:40.522 ******** 2026-03-19 02:09:37.291531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:09:37.291575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:09:37.291597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:09:37.291617 | orchestrator | 2026-03-19 02:09:37.291694 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-19 02:09:37.291715 | orchestrator | Thursday 19 March 2026 02:09:37 +0000 (0:00:02.153) 0:04:42.676 ******** 2026-03-19 02:09:37.291734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 02:09:37.291784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 02:09:47.571751 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:47.571861 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:47.571875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 02:09:47.571889 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:47.571899 | orchestrator | 2026-03-19 02:09:47.571908 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-19 02:09:47.571916 | orchestrator | Thursday 19 March 2026 02:09:37 +0000 (0:00:00.410) 0:04:43.086 ******** 2026-03-19 02:09:47.571926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 02:09:47.571935 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:47.571943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 02:09:47.571951 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:47.571959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 02:09:47.571967 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:47.571975 | orchestrator | 2026-03-19 02:09:47.571983 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-19 02:09:47.571991 | orchestrator | Thursday 19 March 2026 02:09:38 +0000 (0:00:00.652) 0:04:43.739 ******** 2026-03-19 02:09:47.571999 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 02:09:47.572006 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:47.572014 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:47.572022 | orchestrator | 2026-03-19 02:09:47.572030 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-19 02:09:47.572038 | orchestrator | Thursday 19 March 2026 02:09:38 +0000 (0:00:00.800) 0:04:44.540 ******** 2026-03-19 02:09:47.572046 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:47.572078 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:47.572086 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:47.572094 | orchestrator | 2026-03-19 02:09:47.572102 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-19 02:09:47.572110 | orchestrator | Thursday 19 March 2026 02:09:40 +0000 (0:00:01.282) 0:04:45.823 ******** 2026-03-19 02:09:47.572118 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:09:47.572127 | orchestrator | 2026-03-19 02:09:47.572135 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-19 02:09:47.572143 | orchestrator | Thursday 19 March 2026 02:09:41 +0000 (0:00:01.490) 0:04:47.313 ******** 2026-03-19 02:09:47.572168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:47.572195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:47.572205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:47.572215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:47.572238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:47.572294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 02:09:49.512662 | orchestrator | 2026-03-19 02:09:49.512750 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-19 02:09:49.512761 | orchestrator | Thursday 19 March 2026 02:09:47 +0000 (0:00:05.840) 0:04:53.154 ******** 2026-03-19 02:09:49.512772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512816 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:49.512838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512852 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:49.512874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 02:09:49.512893 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:09:49.512899 | orchestrator | 2026-03-19 02:09:49.512905 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-19 02:09:49.512911 | orchestrator | Thursday 19 March 2026 02:09:48 +0000 (0:00:01.048) 0:04:54.203 ******** 2026-03-19 02:09:49.512919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512951 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:09:49.512958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 
02:09:49.512983 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:09:49.512989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:09:49.512999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-19 02:10:38.828541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 02:10:38.828680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-19 02:10:38.828704 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.828718 | orchestrator | 2026-03-19 02:10:38.828769 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-19 02:10:38.828785 | orchestrator | Thursday 19 March 2026 02:09:49 +0000 (0:00:00.893) 0:04:55.096 ******** 2026-03-19 02:10:38.828798 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:10:38.828810 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:10:38.828821 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:10:38.828833 | orchestrator | 2026-03-19 02:10:38.828844 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-19 02:10:38.828856 | orchestrator | Thursday 19 March 2026 02:09:50 +0000 (0:00:01.449) 0:04:56.546 ******** 2026-03-19 02:10:38.828867 | orchestrator | 
changed: [testbed-node-0] 2026-03-19 02:10:38.828879 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:10:38.828892 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:10:38.828904 | orchestrator | 2026-03-19 02:10:38.828916 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-19 02:10:38.828929 | orchestrator | Thursday 19 March 2026 02:09:53 +0000 (0:00:02.289) 0:04:58.836 ******** 2026-03-19 02:10:38.828942 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:38.828954 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.828966 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.828978 | orchestrator | 2026-03-19 02:10:38.828991 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-19 02:10:38.829004 | orchestrator | Thursday 19 March 2026 02:09:53 +0000 (0:00:00.624) 0:04:59.461 ******** 2026-03-19 02:10:38.829016 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:38.829027 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.829039 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.829051 | orchestrator | 2026-03-19 02:10:38.829063 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-19 02:10:38.829076 | orchestrator | Thursday 19 March 2026 02:09:54 +0000 (0:00:00.333) 0:04:59.795 ******** 2026-03-19 02:10:38.829088 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:38.829101 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.829113 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.829125 | orchestrator | 2026-03-19 02:10:38.829137 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-19 02:10:38.829149 | orchestrator | Thursday 19 March 2026 02:09:54 +0000 (0:00:00.302) 0:05:00.097 ******** 2026-03-19 02:10:38.829162 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 02:10:38.829174 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.829186 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.829198 | orchestrator | 2026-03-19 02:10:38.829211 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-19 02:10:38.829224 | orchestrator | Thursday 19 March 2026 02:09:54 +0000 (0:00:00.292) 0:05:00.390 ******** 2026-03-19 02:10:38.829237 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:38.829249 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.829261 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.829274 | orchestrator | 2026-03-19 02:10:38.829317 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-19 02:10:38.829354 | orchestrator | Thursday 19 March 2026 02:09:55 +0000 (0:00:00.589) 0:05:00.980 ******** 2026-03-19 02:10:38.829369 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:38.829407 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:38.829421 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:38.829432 | orchestrator | 2026-03-19 02:10:38.829445 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-19 02:10:38.829458 | orchestrator | Thursday 19 March 2026 02:09:55 +0000 (0:00:00.518) 0:05:01.498 ******** 2026-03-19 02:10:38.829471 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:10:38.829485 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:10:38.829497 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:10:38.829509 | orchestrator | 2026-03-19 02:10:38.829521 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-19 02:10:38.829548 | orchestrator | Thursday 19 March 2026 02:09:56 +0000 (0:00:00.688) 0:05:02.187 ******** 2026-03-19 02:10:38.829561 | orchestrator | ok: [testbed-node-0] 
2026-03-19 02:10:38.829575 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.829587 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.829598 | orchestrator |
2026-03-19 02:10:38.829610 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-19 02:10:38.829621 | orchestrator | Thursday 19 March 2026  02:09:56 +0000 (0:00:00.336) 0:05:02.524 ********
2026-03-19 02:10:38.829632 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.829645 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.829657 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.829668 | orchestrator |
2026-03-19 02:10:38.829680 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-19 02:10:38.829693 | orchestrator | Thursday 19 March 2026  02:09:58 +0000 (0:00:01.245) 0:05:03.769 ********
2026-03-19 02:10:38.829706 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.829718 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.829730 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.829742 | orchestrator |
2026-03-19 02:10:38.829754 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-19 02:10:38.829766 | orchestrator | Thursday 19 March 2026  02:09:59 +0000 (0:00:00.934) 0:05:04.704 ********
2026-03-19 02:10:38.829778 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.829789 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.829801 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.829813 | orchestrator |
2026-03-19 02:10:38.829824 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-19 02:10:38.829866 | orchestrator | Thursday 19 March 2026  02:09:59 +0000 (0:00:00.885) 0:05:05.589 ********
2026-03-19 02:10:38.829878 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:10:38.829886 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:10:38.829894 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:10:38.829901 | orchestrator |
2026-03-19 02:10:38.829908 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-19 02:10:38.829915 | orchestrator | Thursday 19 March 2026  02:10:09 +0000 (0:00:09.531) 0:05:15.121 ********
2026-03-19 02:10:38.829923 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.829930 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.829937 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.829944 | orchestrator |
2026-03-19 02:10:38.829951 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-19 02:10:38.829959 | orchestrator | Thursday 19 March 2026  02:10:10 +0000 (0:00:01.242) 0:05:16.364 ********
2026-03-19 02:10:38.829966 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:10:38.829973 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:10:38.829980 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:10:38.829988 | orchestrator |
2026-03-19 02:10:38.829995 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-19 02:10:38.830002 | orchestrator | Thursday 19 March 2026  02:10:25 +0000 (0:00:14.857) 0:05:31.221 ********
2026-03-19 02:10:38.830010 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.830075 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.830083 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.830090 | orchestrator |
2026-03-19 02:10:38.830097 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-19 02:10:38.830104 | orchestrator | Thursday 19 March 2026  02:10:26 +0000 (0:00:00.752) 0:05:31.973 ********
2026-03-19 02:10:38.830111 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:10:38.830119 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:10:38.830127 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:10:38.830134 | orchestrator |
2026-03-19 02:10:38.830141 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-19 02:10:38.830148 | orchestrator | Thursday 19 March 2026  02:10:30 +0000 (0:00:04.147) 0:05:36.121 ********
2026-03-19 02:10:38.830170 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830177 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830184 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830192 | orchestrator |
2026-03-19 02:10:38.830199 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-19 02:10:38.830206 | orchestrator | Thursday 19 March 2026  02:10:31 +0000 (0:00:00.653) 0:05:36.775 ********
2026-03-19 02:10:38.830214 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830221 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830228 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830235 | orchestrator |
2026-03-19 02:10:38.830243 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-19 02:10:38.830250 | orchestrator | Thursday 19 March 2026  02:10:31 +0000 (0:00:00.340) 0:05:37.115 ********
2026-03-19 02:10:38.830257 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830264 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830271 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830279 | orchestrator |
2026-03-19 02:10:38.830429 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-19 02:10:38.830468 | orchestrator | Thursday 19 March 2026  02:10:31 +0000 (0:00:00.368) 0:05:37.484 ********
2026-03-19 02:10:38.830486 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830495 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830502 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830509 | orchestrator |
2026-03-19 02:10:38.830516 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-19 02:10:38.830523 | orchestrator | Thursday 19 March 2026  02:10:32 +0000 (0:00:00.336) 0:05:37.820 ********
2026-03-19 02:10:38.830536 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830563 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830580 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830593 | orchestrator |
2026-03-19 02:10:38.830605 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-19 02:10:38.830618 | orchestrator | Thursday 19 March 2026  02:10:32 +0000 (0:00:00.628) 0:05:38.449 ********
2026-03-19 02:10:38.830630 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:10:38.830640 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:10:38.830652 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:10:38.830662 | orchestrator |
2026-03-19 02:10:38.830673 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-19 02:10:38.830685 | orchestrator | Thursday 19 March 2026  02:10:33 +0000 (0:00:00.347) 0:05:38.796 ********
2026-03-19 02:10:38.830697 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.830710 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.830722 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.830733 | orchestrator |
2026-03-19 02:10:38.830743 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-19 02:10:38.830754 | orchestrator | Thursday 19 March 2026  02:10:37 +0000 (0:00:04.774) 0:05:43.570 ********
2026-03-19 02:10:38.830765 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:10:38.830775 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:10:38.830787 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:10:38.830798 | orchestrator |
2026-03-19 02:10:38.830810 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:10:38.830824 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-19 02:10:38.830837 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-19 02:10:38.830850 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-19 02:10:38.830861 | orchestrator |
2026-03-19 02:10:38.830872 | orchestrator |
2026-03-19 02:10:38.830898 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:10:38.830929 | orchestrator | Thursday 19 March 2026  02:10:38 +0000 (0:00:00.839) 0:05:44.410 ********
2026-03-19 02:10:39.541776 | orchestrator | ===============================================================================
2026-03-19 02:10:39.541900 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.86s
2026-03-19 02:10:39.541917 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.53s
2026-03-19 02:10:39.541928 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.84s
2026-03-19 02:10:39.541939 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.35s
2026-03-19 02:10:39.541950 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.77s
2026-03-19 02:10:39.541962 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.35s
2026-03-19 02:10:39.541971 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.17s
2026-03-19 02:10:39.541978 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.15s
2026-03-19 02:10:39.541984 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.11s
2026-03-19 02:10:39.541990 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.90s
2026-03-19 02:10:39.541997 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.90s
2026-03-19 02:10:39.542003 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.62s
2026-03-19 02:10:39.542009 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.53s
2026-03-19 02:10:39.542092 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.45s
2026-03-19 02:10:39.542100 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.38s
2026-03-19 02:10:39.542107 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.31s
2026-03-19 02:10:39.542113 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.30s
2026-03-19 02:10:39.542119 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.25s
2026-03-19 02:10:39.542125 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.20s
2026-03-19 02:10:39.542132 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.20s
2026-03-19 02:10:41.776995 | orchestrator | 2026-03-19 02:10:41 | INFO  | Task 62e77fbb-df55-485c-826d-72a56e28ad79 (opensearch) was prepared for execution.
2026-03-19 02:10:41.777110 | orchestrator | 2026-03-19 02:10:41 | INFO  | It takes a moment until task 62e77fbb-df55-485c-826d-72a56e28ad79 (opensearch) has been started and output is visible here.
2026-03-19 02:10:51.382750 | orchestrator | 2026-03-19 02:10:51.382860 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:10:51.382871 | orchestrator | 2026-03-19 02:10:51.382878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:10:51.382885 | orchestrator | Thursday 19 March 2026 02:10:45 +0000 (0:00:00.185) 0:00:00.185 ******** 2026-03-19 02:10:51.382891 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:10:51.382899 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:10:51.382906 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:10:51.382912 | orchestrator | 2026-03-19 02:10:51.382919 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:10:51.382925 | orchestrator | Thursday 19 March 2026 02:10:45 +0000 (0:00:00.219) 0:00:00.405 ******** 2026-03-19 02:10:51.382952 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-19 02:10:51.382958 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-19 02:10:51.382962 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-19 02:10:51.382966 | orchestrator | 2026-03-19 02:10:51.382969 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-19 02:10:51.383016 | orchestrator | 2026-03-19 02:10:51.383020 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-19 02:10:51.383024 | orchestrator | Thursday 19 March 2026 02:10:46 +0000 (0:00:00.296) 0:00:00.701 ******** 2026-03-19 02:10:51.383028 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:10:51.383032 | orchestrator | 2026-03-19 02:10:51.383036 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-03-19 02:10:51.383040 | orchestrator | Thursday 19 March 2026 02:10:46 +0000 (0:00:00.354) 0:00:01.056 ******** 2026-03-19 02:10:51.383044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 02:10:51.383054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 02:10:51.383058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-19 02:10:51.383062 | orchestrator | 2026-03-19 02:10:51.383066 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-19 02:10:51.383070 | orchestrator | Thursday 19 March 2026 02:10:47 +0000 (0:00:00.617) 0:00:01.673 ******** 2026-03-19 02:10:51.383077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:51.383085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:51.383102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:51.383111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:51.383121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:51.383126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:51.383131 | orchestrator | 2026-03-19 02:10:51.383134 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-19 02:10:51.383138 | orchestrator | Thursday 19 March 2026 02:10:48 +0000 (0:00:01.426) 0:00:03.100 ******** 2026-03-19 02:10:51.383142 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:10:51.383146 | orchestrator | 2026-03-19 02:10:51.383150 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-19 02:10:51.383154 | orchestrator | Thursday 19 March 2026 02:10:49 +0000 (0:00:00.384) 0:00:03.484 ******** 2026-03-19 02:10:51.383164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:52.047862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:52.047958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:10:52.047970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:52.047978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:52.048058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:10:52.048068 | orchestrator | 2026-03-19 02:10:52.048077 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-19 02:10:52.048085 | orchestrator | Thursday 19 March 2026 02:10:51 +0000 (0:00:02.342) 0:00:05.827 ******** 2026-03-19 02:10:52.048093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.048100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.048106 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:52.048114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.048138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.887540 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:52.887645 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.887664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.887678 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:52.887688 | orchestrator | 2026-03-19 02:10:52.887699 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-19 02:10:52.887708 | orchestrator | Thursday 19 March 2026 02:10:52 +0000 (0:00:00.664) 0:00:06.492 ******** 2026-03-19 02:10:52.887738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.887783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.887805 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:10:52.887811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.887817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.887823 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:10:52.887835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-19 02:10:52.887845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-19 02:10:52.887851 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:10:52.887856 | orchestrator | 2026-03-19 02:10:52.887862 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-19 02:10:52.887872 | orchestrator | Thursday 19 March 2026 02:10:52 +0000 (0:00:00.832) 0:00:07.325 ******** 2026-03-19 02:11:00.551788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:11:00.551912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:11:00.551932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:11:00.551993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:11:00.552043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:11:00.552079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:11:00.552115 | orchestrator | 2026-03-19 02:11:00.552136 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-19 02:11:00.552153 | orchestrator | Thursday 19 March 2026 02:10:54 +0000 (0:00:02.121) 0:00:09.447 ******** 2026-03-19 02:11:00.552171 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:11:00.552189 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:11:00.552207 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:11:00.552225 | orchestrator | 2026-03-19 02:11:00.552244 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-19 02:11:00.552263 | orchestrator | Thursday 19 March 2026 02:10:57 +0000 (0:00:02.124) 0:00:11.571 ******** 2026-03-19 02:11:00.552281 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:11:00.552299 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:11:00.552351 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:11:00.552370 | orchestrator | 2026-03-19 02:11:00.552390 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-19 
02:11:00.552407 | orchestrator | Thursday 19 March 2026 02:10:58 +0000 (0:00:01.739) 0:00:13.311 ******** 2026-03-19 02:11:00.552428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:11:00.552460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:11:00.552496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-19 02:13:50.421791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:13:50.421902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-19 02:13:50.421928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-19 02:13:50.421936 | orchestrator |
2026-03-19 02:13:50.421944 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 02:13:50.421952 | orchestrator | Thursday 19 March 2026 02:11:00 +0000 (0:00:00.265) 0:00:14.993 ********
2026-03-19 02:13:50.421958 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:13:50.421965 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:13:50.421972 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:13:50.421978 | orchestrator |
2026-03-19 02:13:50.421984 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-19 02:13:50.421991 | orchestrator | Thursday 19 March 2026 02:11:00 +0000 (0:00:00.064) 0:00:15.259 ********
2026-03-19 02:13:50.421997 | orchestrator |
2026-03-19 02:13:50.422003 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-19 02:13:50.422009 | orchestrator | Thursday 19 March 2026 02:11:00 +0000 (0:00:00.064) 0:00:15.323 ********
2026-03-19 02:13:50.422057 | orchestrator |
2026-03-19 02:13:50.422065 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-19 02:13:50.422078 | orchestrator | Thursday 19 March 2026 02:11:00 +0000 (0:00:00.064) 0:00:15.388 ********
2026-03-19 02:13:50.422084 | orchestrator |
2026-03-19 02:13:50.422090 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-19 02:13:50.422111 | orchestrator | Thursday 19 March 2026 02:11:00 +0000 (0:00:00.062) 0:00:15.450 ********
2026-03-19 02:13:50.422118 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:13:50.422124 | orchestrator |
2026-03-19 02:13:50.422129 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-19 02:13:50.422135 | orchestrator | Thursday 19 March 2026 02:11:01 +0000 (0:00:00.221) 0:00:15.672 ********
2026-03-19 02:13:50.422142 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:13:50.422146 | orchestrator |
2026-03-19 02:13:50.422149 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-19 02:13:50.422153 | orchestrator | Thursday 19 March 2026 02:11:01 +0000 (0:00:00.630) 0:00:16.302 ********
2026-03-19 02:13:50.422157 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:13:50.422161 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:13:50.422164 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:13:50.422168 | orchestrator |
2026-03-19 02:13:50.422172 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-19 02:13:50.422176 | orchestrator | Thursday 19 March 2026 02:12:07 +0000 (0:01:05.603) 0:01:21.906 ********
2026-03-19 02:13:50.422179 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:13:50.422183 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:13:50.422187 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:13:50.422190 | orchestrator |
2026-03-19 02:13:50.422194 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 02:13:50.422198 | orchestrator | Thursday 19 March 2026 02:13:38 +0000 (0:01:31.449) 0:02:53.355 ********
2026-03-19 02:13:50.422203 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:13:50.422207 | orchestrator |
2026-03-19 02:13:50.422211 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-19 02:13:50.422214 | orchestrator | Thursday 19 March 2026 02:13:39 +0000 (0:00:00.537) 0:02:53.894 ********
2026-03-19 02:13:50.422218 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:13:50.422222 | orchestrator |
2026-03-19 02:13:50.422226 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-19 02:13:50.422229 | orchestrator | Thursday 19 March 2026 02:13:42 +0000 (0:00:02.800) 0:02:56.694 ********
2026-03-19 02:13:50.422233 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:13:50.422237 | orchestrator |
2026-03-19 02:13:50.422240 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-19 02:13:50.422244 | orchestrator | Thursday 19 March 2026 02:13:44 +0000 (0:00:02.450) 0:02:59.145 ********
2026-03-19 02:13:50.422248 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:13:50.422252 | orchestrator |
2026-03-19 02:13:50.422256 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-19 02:13:50.422259 | orchestrator | Thursday 19 March 2026 02:13:47 +0000 (0:00:02.859) 0:03:02.004 ********
2026-03-19 02:13:50.422263 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:13:50.422267 | orchestrator |
2026-03-19 02:13:50.422271 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:13:50.422275 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 02:13:50.422281 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-19 02:13:50.422289 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-19 02:13:50.422293 | orchestrator |
2026-03-19 02:13:50.422297 | orchestrator |
2026-03-19 02:13:50.422304 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:13:50.422308 | orchestrator | Thursday 19 March 2026 02:13:50 +0000 (0:00:02.834) 0:03:04.839 ********
2026-03-19 02:13:50.422312 | orchestrator | ===============================================================================
2026-03-19 02:13:50.422315 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 91.45s
2026-03-19 02:13:50.422319 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.60s
2026-03-19 02:13:50.422323 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.86s
2026-03-19 02:13:50.422327 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.83s
2026-03-19 02:13:50.422330 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.80s
2026-03-19 02:13:50.422334 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.45s
2026-03-19 02:13:50.422338 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.34s
2026-03-19 02:13:50.422342 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.12s
2026-03-19 02:13:50.422345 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.12s
2026-03-19 02:13:50.422349 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.74s
2026-03-19 02:13:50.422353 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.68s
2026-03-19 02:13:50.422357 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.43s
2026-03-19 02:13:50.422362 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.83s
2026-03-19 02:13:50.422366 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.66s
2026-03-19 02:13:50.422370 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.63s
2026-03-19 02:13:50.422375 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.62s
2026-03-19 02:13:50.422381 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2026-03-19 02:13:50.754514 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.38s
2026-03-19 02:13:50.754608 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.35s
2026-03-19 02:13:50.754619 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2026-03-19 02:13:53.238948 | orchestrator | 2026-03-19 02:13:53 | INFO  | Task 9a03af3f-843c-4640-b556-fbf751c8e847 (memcached) was prepared for execution.
2026-03-19 02:13:53.239061 | orchestrator | 2026-03-19 02:13:53 | INFO  | It takes a moment until task 9a03af3f-843c-4640-b556-fbf751c8e847 (memcached) has been started and output is visible here.
2026-03-19 02:14:04.964155 | orchestrator |
2026-03-19 02:14:04.964260 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 02:14:04.964273 | orchestrator |
2026-03-19 02:14:04.964281 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 02:14:04.964290 | orchestrator | Thursday 19 March 2026 02:13:57 +0000 (0:00:00.260) 0:00:00.260 ********
2026-03-19 02:14:04.964298 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:14:04.964307 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:14:04.964314 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:14:04.964322 | orchestrator |
2026-03-19 02:14:04.964330 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 02:14:04.964337 | orchestrator | Thursday 19 March 2026 02:13:57 +0000 (0:00:00.289) 0:00:00.549 ********
2026-03-19 02:14:04.964345 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-19 02:14:04.964353 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-19 02:14:04.964361 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-19 02:14:04.964368 | orchestrator |
2026-03-19 02:14:04.964375 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-19 02:14:04.964409 | orchestrator |
2026-03-19 02:14:04.964417 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-19 02:14:04.964425 | orchestrator | Thursday 19 March 2026 02:13:58 +0000 (0:00:00.432) 0:00:00.982 ********
2026-03-19 02:14:04.964432 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:14:04.964441 | orchestrator |
2026-03-19 02:14:04.964448 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-19 02:14:04.964455 | orchestrator | Thursday 19 March 2026 02:13:58 +0000 (0:00:00.479) 0:00:01.461 ********
2026-03-19 02:14:04.964463 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-19 02:14:04.964514 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-19 02:14:04.964522 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-19 02:14:04.964529 | orchestrator |
2026-03-19 02:14:04.964536 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-19 02:14:04.964543 | orchestrator | Thursday 19 March 2026 02:13:59 +0000 (0:00:00.678) 0:00:02.140 ********
2026-03-19 02:14:04.964550 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-19 02:14:04.964558 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-19 02:14:04.964565 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-19 02:14:04.964572 | orchestrator |
2026-03-19 02:14:04.964579 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-19 02:14:04.964587 | orchestrator | Thursday 19 March 2026 02:14:00 +0000 (0:00:01.646) 0:00:03.787 ********
2026-03-19 02:14:04.964618 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:14:04.964626 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:14:04.964633 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:14:04.964641 | orchestrator |
2026-03-19 02:14:04.964648 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-19 02:14:04.964655 | orchestrator | Thursday 19 March 2026 02:14:02 +0000 (0:00:01.467) 0:00:05.254 ********
2026-03-19 02:14:04.964662 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:14:04.964670 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:14:04.964677 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:14:04.964684 | orchestrator |
2026-03-19 02:14:04.964691 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:14:04.964699 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:04.964707 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:04.964714 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:04.964722 | orchestrator |
2026-03-19 02:14:04.964729 | orchestrator |
2026-03-19 02:14:04.964736 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:14:04.964743 | orchestrator | Thursday 19 March 2026 02:14:04 +0000 (0:00:02.127) 0:00:07.382 ********
2026-03-19 02:14:04.964750 | orchestrator | ===============================================================================
2026-03-19 02:14:04.964758 | orchestrator | memcached : Restart memcached container --------------------------------- 2.13s
2026-03-19 02:14:04.964765 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.65s
2026-03-19 02:14:04.964772 | orchestrator | memcached : Check memcached container ----------------------------------- 1.47s
2026-03-19 02:14:04.964780 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s
2026-03-19 02:14:04.964787 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.48s
2026-03-19 02:14:04.964794 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-03-19 02:14:04.964801 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-19 02:14:07.331680 | orchestrator | 2026-03-19 02:14:07 | INFO  | Task 628261ac-1cc6-46fb-ad36-7e523711bb14 (redis) was prepared for execution.
2026-03-19 02:14:07.331780 | orchestrator | 2026-03-19 02:14:07 | INFO  | It takes a moment until task 628261ac-1cc6-46fb-ad36-7e523711bb14 (redis) has been started and output is visible here.
2026-03-19 02:14:16.220035 | orchestrator |
2026-03-19 02:14:16.220175 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 02:14:16.220195 | orchestrator |
2026-03-19 02:14:16.220208 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 02:14:16.220219 | orchestrator | Thursday 19 March 2026 02:14:11 +0000 (0:00:00.247) 0:00:00.247 ********
2026-03-19 02:14:16.220231 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:14:16.220243 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:14:16.220254 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:14:16.220265 | orchestrator |
2026-03-19 02:14:16.220277 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 02:14:16.220288 | orchestrator | Thursday 19 March 2026 02:14:11 +0000 (0:00:00.285) 0:00:00.532 ********
2026-03-19 02:14:16.220299 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-19 02:14:16.220310 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-19 02:14:16.220321 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-19 02:14:16.220332 | orchestrator |
2026-03-19 02:14:16.220343 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-19 02:14:16.220354 | orchestrator |
2026-03-19 02:14:16.220365 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-19 02:14:16.220376 | orchestrator | Thursday 19 March 2026 02:14:12 +0000 (0:00:00.409) 0:00:00.942 ********
2026-03-19 02:14:16.220387 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:14:16.220399 | orchestrator |
2026-03-19 02:14:16.220413 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-19
02:14:16.220432 | orchestrator | Thursday 19 March 2026 02:14:12 +0000 (0:00:00.453) 0:00:01.395 ******** 2026-03-19 02:14:16.220455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220591 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220656 | orchestrator | 2026-03-19 02:14:16.220668 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-19 02:14:16.220679 | orchestrator | Thursday 19 March 2026 02:14:13 +0000 (0:00:01.118) 0:00:02.514 ******** 2026-03-19 02:14:16.220691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:16.220950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291455 | orchestrator | 2026-03-19 02:14:20.291462 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-19 02:14:20.291467 | orchestrator | Thursday 19 March 2026 02:14:16 +0000 (0:00:02.504) 0:00:05.018 ******** 2026-03-19 02:14:20.291473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 
02:14:20.291535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291592 | orchestrator | 2026-03-19 02:14:20.291599 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-19 02:14:20.291606 | orchestrator | Thursday 19 March 2026 02:14:18 +0000 (0:00:02.403) 0:00:07.422 ******** 2026-03-19 02:14:20.291612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:20.291661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 02:14:35.688120 | orchestrator | 2026-03-19 02:14:35.688276 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 02:14:35.688303 | orchestrator | Thursday 19 March 2026 02:14:20 +0000 (0:00:01.475) 0:00:08.897 ******** 2026-03-19 02:14:35.688324 | orchestrator | 2026-03-19 02:14:35.688342 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 02:14:35.688360 | orchestrator | Thursday 19 March 2026 02:14:20 +0000 (0:00:00.062) 0:00:08.960 ******** 2026-03-19 02:14:35.688379 | orchestrator | 2026-03-19 02:14:35.688399 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 02:14:35.688419 | orchestrator | Thursday 19 March 2026 
02:14:20 +0000 (0:00:00.064) 0:00:09.024 ********
2026-03-19 02:14:35.688436 | orchestrator |
2026-03-19 02:14:35.688454 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-19 02:14:35.688473 | orchestrator | Thursday 19 March 2026 02:14:20 +0000 (0:00:00.064) 0:00:09.089 ********
2026-03-19 02:14:35.688551 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:14:35.688574 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:14:35.688594 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:14:35.688612 | orchestrator |
2026-03-19 02:14:35.688629 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-19 02:14:35.688646 | orchestrator | Thursday 19 March 2026 02:14:28 +0000 (0:00:08.047) 0:00:17.136 ********
2026-03-19 02:14:35.688702 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:14:35.688722 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:14:35.688740 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:14:35.688760 | orchestrator |
2026-03-19 02:14:35.688781 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:14:35.688803 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:35.688825 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:35.688868 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 02:14:35.688890 | orchestrator |
2026-03-19 02:14:35.688910 | orchestrator |
2026-03-19 02:14:35.688928 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:14:35.688947 | orchestrator | Thursday 19 March 2026 02:14:35 +0000 (0:00:07.044) 0:00:24.180 ********
2026-03-19 02:14:35.688965 | orchestrator | 
===============================================================================
2026-03-19 02:14:35.688983 | orchestrator | redis : Restart redis container ----------------------------------------- 8.05s
2026-03-19 02:14:35.689002 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.04s
2026-03-19 02:14:35.689020 | orchestrator | redis : Copying over default config.json files -------------------------- 2.50s
2026-03-19 02:14:35.689037 | orchestrator | redis : Copying over redis config files --------------------------------- 2.40s
2026-03-19 02:14:35.689055 | orchestrator | redis : Check redis containers ------------------------------------------ 1.48s
2026-03-19 02:14:35.689073 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.12s
2026-03-19 02:14:35.689091 | orchestrator | redis : include_tasks --------------------------------------------------- 0.45s
2026-03-19 02:14:35.689108 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-03-19 02:14:35.689125 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-19 02:14:35.689142 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s
2026-03-19 02:14:37.982706 | orchestrator | 2026-03-19 02:14:37 | INFO  | Task 97d32d10-33a7-4e44-9049-54ad0ec22da2 (mariadb) was prepared for execution.
2026-03-19 02:14:37.982816 | orchestrator | 2026-03-19 02:14:37 | INFO  | It takes a moment until task 97d32d10-33a7-4e44-9049-54ad0ec22da2 (mariadb) has been started and output is visible here.
2026-03-19 02:14:50.526888 | orchestrator |
2026-03-19 02:14:50.526986 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 02:14:50.526996 | orchestrator |
2026-03-19 02:14:50.527003 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 02:14:50.527011 | orchestrator | Thursday 19 March 2026 02:14:42 +0000 (0:00:00.162) 0:00:00.162 ********
2026-03-19 02:14:50.527019 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:14:50.527027 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:14:50.527034 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:14:50.527041 | orchestrator |
2026-03-19 02:14:50.527048 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 02:14:50.527055 | orchestrator | Thursday 19 March 2026 02:14:42 +0000 (0:00:00.255) 0:00:00.417 ********
2026-03-19 02:14:50.527062 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-19 02:14:50.527069 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-19 02:14:50.527075 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-19 02:14:50.527082 | orchestrator |
2026-03-19 02:14:50.527088 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-19 02:14:50.527095 | orchestrator |
2026-03-19 02:14:50.527101 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-19 02:14:50.527134 | orchestrator | Thursday 19 March 2026 02:14:42 +0000 (0:00:00.442) 0:00:00.859 ********
2026-03-19 02:14:50.527141 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 02:14:50.527147 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 02:14:50.527154 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 02:14:50.527161 | orchestrator | 
2026-03-19 02:14:50.527167 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 02:14:50.527174 | orchestrator | Thursday 19 March 2026 02:14:43 +0000 (0:00:00.330) 0:00:01.190 ******** 2026-03-19 02:14:50.527181 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:14:50.527188 | orchestrator | 2026-03-19 02:14:50.527194 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-19 02:14:50.527201 | orchestrator | Thursday 19 March 2026 02:14:43 +0000 (0:00:00.497) 0:00:01.688 ******** 2026-03-19 02:14:50.527228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:50.527254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:50.527271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:50.527278 | orchestrator | 2026-03-19 02:14:50.527285 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-19 02:14:50.527292 | orchestrator | Thursday 19 March 2026 02:14:45 +0000 (0:00:02.278) 0:00:03.966 ******** 2026-03-19 02:14:50.527298 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:14:50.527306 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:14:50.527312 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:14:50.527319 | orchestrator | 2026-03-19 02:14:50.527325 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-19 02:14:50.527332 | orchestrator | Thursday 19 March 2026 02:14:46 +0000 (0:00:00.568) 0:00:04.535 ******** 2026-03-19 02:14:50.527338 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:14:50.527345 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:14:50.527351 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:14:50.527358 | orchestrator | 2026-03-19 02:14:50.527364 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-19 02:14:50.527371 | orchestrator | Thursday 19 March 2026 02:14:47 +0000 (0:00:01.360) 0:00:05.895 ******** 2026-03-19 02:14:50.527383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:57.976843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:57.976967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:14:57.977020 | orchestrator | 2026-03-19 02:14:57.977039 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-19 02:14:57.977057 | orchestrator | Thursday 19 March 2026 02:14:50 +0000 (0:00:02.727) 0:00:08.623 ******** 2026-03-19 02:14:57.977071 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:14:57.977087 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:14:57.977102 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:14:57.977115 | orchestrator | 2026-03-19 02:14:57.977130 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-19 02:14:57.977165 | orchestrator | Thursday 19 March 2026 02:14:51 +0000 (0:00:01.112) 0:00:09.736 ******** 2026-03-19 02:14:57.977180 | 
orchestrator | changed: [testbed-node-0] 2026-03-19 02:14:57.977194 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:14:57.977209 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:14:57.977223 | orchestrator | 2026-03-19 02:14:57.977238 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 02:14:57.977252 | orchestrator | Thursday 19 March 2026 02:14:55 +0000 (0:00:03.621) 0:00:13.358 ******** 2026-03-19 02:14:57.977267 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:14:57.977282 | orchestrator | 2026-03-19 02:14:57.977296 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-19 02:14:57.977394 | orchestrator | Thursday 19 March 2026 02:14:55 +0000 (0:00:00.490) 0:00:13.848 ******** 2026-03-19 02:14:57.977431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:14:57.977460 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:14:57.977488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:02.695740 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:15:02.695863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:02.695898 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:15:02.695905 | orchestrator | 2026-03-19 02:15:02.695912 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-19 02:15:02.695919 | orchestrator | Thursday 19 March 2026 02:14:57 +0000 (0:00:02.224) 0:00:16.073 ******** 2026-03-19 02:15:02.695929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:02.695941 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:15:02.695978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:02.696000 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:15:02.696011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:02.696021 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:15:02.696031 | orchestrator | 2026-03-19 02:15:02.696042 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-19 02:15:02.696051 | orchestrator | Thursday 19 March 2026 02:15:00 +0000 (0:00:02.434) 0:00:18.507 ******** 2026-03-19 02:15:02.696073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:05.529448 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:15:05.529700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:05.529728 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:15:05.529761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 02:15:05.529800 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:15:05.529813 | orchestrator | 2026-03-19 02:15:05.529826 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-19 02:15:05.529839 | orchestrator | Thursday 19 March 2026 02:15:02 +0000 (0:00:02.286) 0:00:20.794 ******** 2026-03-19 02:15:05.529872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:15:05.529887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:15:05.529915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 02:17:16.123990 | orchestrator | 2026-03-19 02:17:16.124079 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-19 02:17:16.124086 | orchestrator | Thursday 19 March 2026 02:15:05 +0000 (0:00:02.833) 0:00:23.628 ******** 2026-03-19 02:17:16.124091 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:16.124096 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:17:16.124100 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:17:16.124105 | orchestrator | 2026-03-19 02:17:16.124109 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-19 02:17:16.124113 | orchestrator | Thursday 19 March 2026 02:15:06 +0000 (0:00:00.803) 0:00:24.431 ******** 2026-03-19 02:17:16.124117 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124122 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124125 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124129 | orchestrator | 2026-03-19 02:17:16.124133 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-19 02:17:16.124137 | orchestrator | Thursday 19 March 2026 02:15:06 +0000 (0:00:00.486) 0:00:24.918 ******** 2026-03-19 02:17:16.124141 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124144 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124148 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124152 | orchestrator | 2026-03-19 02:17:16.124156 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-19 02:17:16.124159 | orchestrator | Thursday 19 March 2026 02:15:07 +0000 (0:00:00.303) 0:00:25.221 ******** 2026-03-19 02:17:16.124164 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-19 02:17:16.124169 | orchestrator | ...ignoring 2026-03-19 02:17:16.124173 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-19 02:17:16.124177 | orchestrator | ...ignoring 2026-03-19 02:17:16.124181 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-19 02:17:16.124185 | orchestrator | ...ignoring 2026-03-19 02:17:16.124209 | orchestrator | 2026-03-19 02:17:16.124213 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-19 02:17:16.124217 | orchestrator | Thursday 19 March 2026 02:15:17 +0000 (0:00:10.834) 0:00:36.055 ******** 2026-03-19 02:17:16.124220 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124224 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124228 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124231 | orchestrator | 2026-03-19 02:17:16.124235 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-19 02:17:16.124239 | orchestrator | Thursday 19 March 2026 02:15:18 +0000 (0:00:00.462) 0:00:36.518 ******** 2026-03-19 02:17:16.124243 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124246 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124250 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124254 | orchestrator | 2026-03-19 02:17:16.124258 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-19 02:17:16.124261 | orchestrator | Thursday 19 March 2026 02:15:19 +0000 (0:00:00.622) 0:00:37.140 ******** 2026-03-19 02:17:16.124265 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124269 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124272 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124325 | orchestrator | 2026-03-19 02:17:16.124340 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-19 02:17:16.124344 | orchestrator | Thursday 19 March 2026 02:15:19 +0000 (0:00:00.416) 0:00:37.557 ******** 2026-03-19 02:17:16.124348 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 02:17:16.124352 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124356 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124359 | orchestrator | 2026-03-19 02:17:16.124363 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-19 02:17:16.124367 | orchestrator | Thursday 19 March 2026 02:15:19 +0000 (0:00:00.430) 0:00:37.987 ******** 2026-03-19 02:17:16.124371 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124374 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124378 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124382 | orchestrator | 2026-03-19 02:17:16.124386 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-19 02:17:16.124390 | orchestrator | Thursday 19 March 2026 02:15:20 +0000 (0:00:00.486) 0:00:38.474 ******** 2026-03-19 02:17:16.124394 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124398 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124401 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124405 | orchestrator | 2026-03-19 02:17:16.124409 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 02:17:16.124413 | orchestrator | Thursday 19 March 2026 02:15:21 +0000 (0:00:00.832) 0:00:39.306 ******** 2026-03-19 02:17:16.124416 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124420 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124424 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-19 02:17:16.124428 | orchestrator | 2026-03-19 02:17:16.124431 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-19 02:17:16.124435 | orchestrator | Thursday 19 March 2026 02:15:21 +0000 (0:00:00.407) 0:00:39.714 ******** 2026-03-19 
02:17:16.124439 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:16.124443 | orchestrator | 2026-03-19 02:17:16.124446 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-19 02:17:16.124450 | orchestrator | Thursday 19 March 2026 02:15:31 +0000 (0:00:10.186) 0:00:49.901 ******** 2026-03-19 02:17:16.124454 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124458 | orchestrator | 2026-03-19 02:17:16.124461 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 02:17:16.124466 | orchestrator | Thursday 19 March 2026 02:15:31 +0000 (0:00:00.131) 0:00:50.032 ******** 2026-03-19 02:17:16.124469 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124488 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124492 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124496 | orchestrator | 2026-03-19 02:17:16.124500 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-19 02:17:16.124503 | orchestrator | Thursday 19 March 2026 02:15:32 +0000 (0:00:00.959) 0:00:50.992 ******** 2026-03-19 02:17:16.124507 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:16.124511 | orchestrator | 2026-03-19 02:17:16.124515 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-19 02:17:16.124518 | orchestrator | Thursday 19 March 2026 02:15:40 +0000 (0:00:07.501) 0:00:58.494 ******** 2026-03-19 02:17:16.124522 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124526 | orchestrator | 2026-03-19 02:17:16.124530 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-19 02:17:16.124533 | orchestrator | Thursday 19 March 2026 02:15:42 +0000 (0:00:01.620) 0:01:00.115 ******** 2026-03-19 02:17:16.124537 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124541 | 
orchestrator | 2026-03-19 02:17:16.124545 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-19 02:17:16.124549 | orchestrator | Thursday 19 March 2026 02:15:44 +0000 (0:00:02.451) 0:01:02.567 ******** 2026-03-19 02:17:16.124553 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:16.124558 | orchestrator | 2026-03-19 02:17:16.124562 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-19 02:17:16.124566 | orchestrator | Thursday 19 March 2026 02:15:44 +0000 (0:00:00.115) 0:01:02.682 ******** 2026-03-19 02:17:16.124570 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124597 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:16.124602 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:16.124607 | orchestrator | 2026-03-19 02:17:16.124611 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-19 02:17:16.124615 | orchestrator | Thursday 19 March 2026 02:15:44 +0000 (0:00:00.310) 0:01:02.993 ******** 2026-03-19 02:17:16.124620 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:16.124624 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-19 02:17:16.124628 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:17:16.124633 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:17:16.124637 | orchestrator | 2026-03-19 02:17:16.124641 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-19 02:17:16.124645 | orchestrator | skipping: no hosts matched 2026-03-19 02:17:16.124650 | orchestrator | 2026-03-19 02:17:16.124654 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-19 02:17:16.124659 | orchestrator | 2026-03-19 02:17:16.124663 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-19 02:17:16.124667 | orchestrator | Thursday 19 March 2026 02:15:45 +0000 (0:00:00.486) 0:01:03.480 ******** 2026-03-19 02:17:16.124671 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:17:16.124676 | orchestrator | 2026-03-19 02:17:16.124680 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 02:17:16.124684 | orchestrator | Thursday 19 March 2026 02:16:02 +0000 (0:00:16.796) 0:01:20.276 ******** 2026-03-19 02:17:16.124689 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124693 | orchestrator | 2026-03-19 02:17:16.124697 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 02:17:16.124701 | orchestrator | Thursday 19 March 2026 02:16:18 +0000 (0:00:16.577) 0:01:36.854 ******** 2026-03-19 02:17:16.124705 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:16.124710 | orchestrator | 2026-03-19 02:17:16.124716 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-19 02:17:16.124721 | orchestrator | 2026-03-19 02:17:16.124728 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 02:17:16.124732 | orchestrator | Thursday 19 March 2026 02:16:21 +0000 (0:00:02.306) 0:01:39.160 ******** 2026-03-19 02:17:16.124741 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:17:16.124745 | orchestrator | 2026-03-19 02:17:16.124749 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 02:17:16.124754 | orchestrator | Thursday 19 March 2026 02:16:36 +0000 (0:00:15.305) 0:01:54.466 ******** 2026-03-19 02:17:16.124758 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124762 | orchestrator | 2026-03-19 02:17:16.124767 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 02:17:16.124771 
| orchestrator | Thursday 19 March 2026 02:16:52 +0000 (0:00:16.557) 0:02:11.023 ******** 2026-03-19 02:17:16.124775 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:16.124780 | orchestrator | 2026-03-19 02:17:16.124784 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-19 02:17:16.124788 | orchestrator | 2026-03-19 02:17:16.124793 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 02:17:16.124797 | orchestrator | Thursday 19 March 2026 02:16:55 +0000 (0:00:02.408) 0:02:13.432 ******** 2026-03-19 02:17:16.124801 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:16.124805 | orchestrator | 2026-03-19 02:17:16.124810 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 02:17:16.124814 | orchestrator | Thursday 19 March 2026 02:17:12 +0000 (0:00:17.013) 0:02:30.445 ******** 2026-03-19 02:17:16.124818 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124822 | orchestrator | 2026-03-19 02:17:16.124827 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 02:17:16.124831 | orchestrator | Thursday 19 March 2026 02:17:12 +0000 (0:00:00.592) 0:02:31.038 ******** 2026-03-19 02:17:16.124835 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:16.124839 | orchestrator | 2026-03-19 02:17:16.124843 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-19 02:17:16.124847 | orchestrator | 2026-03-19 02:17:16.124852 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-19 02:17:16.124856 | orchestrator | Thursday 19 March 2026 02:17:15 +0000 (0:00:02.658) 0:02:33.697 ******** 2026-03-19 02:17:16.124860 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:17:16.124865 | orchestrator | 
2026-03-19 02:17:16.124869 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-19 02:17:16.124876 | orchestrator | Thursday 19 March 2026 02:17:16 +0000 (0:00:00.517) 0:02:34.215 ******** 2026-03-19 02:17:29.150368 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:29.150495 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:29.150511 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:29.150523 | orchestrator | 2026-03-19 02:17:29.150536 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-19 02:17:29.150548 | orchestrator | Thursday 19 March 2026 02:17:18 +0000 (0:00:02.491) 0:02:36.706 ******** 2026-03-19 02:17:29.150559 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:29.150571 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:29.150581 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:29.150592 | orchestrator | 2026-03-19 02:17:29.150603 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-19 02:17:29.150614 | orchestrator | Thursday 19 March 2026 02:17:20 +0000 (0:00:02.317) 0:02:39.024 ******** 2026-03-19 02:17:29.150625 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:29.150636 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:29.150646 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:29.150657 | orchestrator | 2026-03-19 02:17:29.150668 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-19 02:17:29.150679 | orchestrator | Thursday 19 March 2026 02:17:23 +0000 (0:00:02.462) 0:02:41.487 ******** 2026-03-19 02:17:29.150690 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:29.150701 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:29.150712 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:17:29.150722 | orchestrator | 
2026-03-19 02:17:29.150763 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-19 02:17:29.150775 | orchestrator | Thursday 19 March 2026 02:17:25 +0000 (0:00:02.258) 0:02:43.746 ******** 2026-03-19 02:17:29.150785 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:29.150797 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:29.150808 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:29.150818 | orchestrator | 2026-03-19 02:17:29.150829 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-19 02:17:29.150840 | orchestrator | Thursday 19 March 2026 02:17:28 +0000 (0:00:02.812) 0:02:46.559 ******** 2026-03-19 02:17:29.150851 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:29.150864 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:17:29.150876 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:17:29.150888 | orchestrator | 2026-03-19 02:17:29.150901 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:17:29.150914 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-19 02:17:29.150928 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-19 02:17:29.150957 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-19 02:17:29.150981 | orchestrator | 2026-03-19 02:17:29.150993 | orchestrator | 2026-03-19 02:17:29.151006 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:17:29.151018 | orchestrator | Thursday 19 March 2026 02:17:28 +0000 (0:00:00.389) 0:02:46.948 ******** 2026-03-19 02:17:29.151031 | orchestrator | =============================================================================== 2026-03-19 02:17:29.151060 | 
orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.13s 2026-03-19 02:17:29.151073 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.10s 2026-03-19 02:17:29.151086 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.01s 2026-03-19 02:17:29.151105 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2026-03-19 02:17:29.151123 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.19s 2026-03-19 02:17:29.151142 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.50s 2026-03-19 02:17:29.151162 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.72s 2026-03-19 02:17:29.151183 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.62s 2026-03-19 02:17:29.151201 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.83s 2026-03-19 02:17:29.151222 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.81s 2026-03-19 02:17:29.151244 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.73s 2026-03-19 02:17:29.151297 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.66s 2026-03-19 02:17:29.151316 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.49s 2026-03-19 02:17:29.151334 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.46s 2026-03-19 02:17:29.151351 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s 2026-03-19 02:17:29.151367 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.43s 2026-03-19 02:17:29.151386 | 
orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.32s 2026-03-19 02:17:29.151406 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.29s 2026-03-19 02:17:29.151426 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.28s 2026-03-19 02:17:29.151446 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.26s 2026-03-19 02:17:31.388830 | orchestrator | 2026-03-19 02:17:31 | INFO  | Task e4c2c56c-f596-48cb-b4c3-4e9b8de43c3b (rabbitmq) was prepared for execution. 2026-03-19 02:17:31.388909 | orchestrator | 2026-03-19 02:17:31 | INFO  | It takes a moment until task e4c2c56c-f596-48cb-b4c3-4e9b8de43c3b (rabbitmq) has been started and output is visible here. 2026-03-19 02:17:43.408113 | orchestrator | 2026-03-19 02:17:43.408248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:17:43.408268 | orchestrator | 2026-03-19 02:17:43.408277 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:17:43.408287 | orchestrator | Thursday 19 March 2026 02:17:34 +0000 (0:00:00.148) 0:00:00.148 ******** 2026-03-19 02:17:43.408295 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:43.408304 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:17:43.408312 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:17:43.408320 | orchestrator | 2026-03-19 02:17:43.408328 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:17:43.408337 | orchestrator | Thursday 19 March 2026 02:17:35 +0000 (0:00:00.295) 0:00:00.444 ******** 2026-03-19 02:17:43.408345 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-19 02:17:43.408353 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-19 02:17:43.408361 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-19 02:17:43.408369 | orchestrator | 2026-03-19 02:17:43.408376 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-19 02:17:43.408385 | orchestrator | 2026-03-19 02:17:43.408393 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 02:17:43.408401 | orchestrator | Thursday 19 March 2026 02:17:35 +0000 (0:00:00.452) 0:00:00.897 ******** 2026-03-19 02:17:43.408409 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:17:43.408418 | orchestrator | 2026-03-19 02:17:43.408426 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-19 02:17:43.408434 | orchestrator | Thursday 19 March 2026 02:17:36 +0000 (0:00:00.478) 0:00:01.375 ******** 2026-03-19 02:17:43.408442 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:43.408449 | orchestrator | 2026-03-19 02:17:43.408457 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-19 02:17:43.408465 | orchestrator | Thursday 19 March 2026 02:17:37 +0000 (0:00:00.969) 0:00:02.345 ******** 2026-03-19 02:17:43.408473 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408482 | orchestrator | 2026-03-19 02:17:43.408490 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-19 02:17:43.408498 | orchestrator | Thursday 19 March 2026 02:17:37 +0000 (0:00:00.328) 0:00:02.673 ******** 2026-03-19 02:17:43.408506 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408514 | orchestrator | 2026-03-19 02:17:43.408522 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-19 02:17:43.408530 | orchestrator | Thursday 19 March 2026 02:17:37 +0000 (0:00:00.338) 0:00:03.011 ******** 
2026-03-19 02:17:43.408538 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408545 | orchestrator | 2026-03-19 02:17:43.408553 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-19 02:17:43.408561 | orchestrator | Thursday 19 March 2026 02:17:38 +0000 (0:00:00.340) 0:00:03.351 ******** 2026-03-19 02:17:43.408569 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408577 | orchestrator | 2026-03-19 02:17:43.408585 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 02:17:43.408592 | orchestrator | Thursday 19 March 2026 02:17:38 +0000 (0:00:00.439) 0:00:03.791 ******** 2026-03-19 02:17:43.408616 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:17:43.408624 | orchestrator | 2026-03-19 02:17:43.408655 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-19 02:17:43.408665 | orchestrator | Thursday 19 March 2026 02:17:39 +0000 (0:00:00.835) 0:00:04.627 ******** 2026-03-19 02:17:43.408675 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:17:43.408684 | orchestrator | 2026-03-19 02:17:43.408692 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-19 02:17:43.408701 | orchestrator | Thursday 19 March 2026 02:17:40 +0000 (0:00:00.862) 0:00:05.489 ******** 2026-03-19 02:17:43.408711 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408719 | orchestrator | 2026-03-19 02:17:43.408728 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-19 02:17:43.408737 | orchestrator | Thursday 19 March 2026 02:17:40 +0000 (0:00:00.391) 0:00:05.881 ******** 2026-03-19 02:17:43.408746 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:17:43.408755 | orchestrator | 2026-03-19 
02:17:43.408765 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-19 02:17:43.408773 | orchestrator | Thursday 19 March 2026 02:17:40 +0000 (0:00:00.348) 0:00:06.229 ******** 2026-03-19 02:17:43.408804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:17:43.408819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:17:43.408830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:17:43.408846 | orchestrator | 2026-03-19 02:17:43.408868 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-19 02:17:43.408878 | orchestrator | Thursday 19 March 2026 02:17:41 +0000 (0:00:00.781) 0:00:07.011 ******** 2026-03-19 02:17:43.408888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:17:43.408906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:18:01.374549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:18:01.374650 | orchestrator | 2026-03-19 02:18:01.374660 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-19 02:18:01.374668 | orchestrator | Thursday 19 March 2026 02:17:43 +0000 (0:00:01.675) 0:00:08.686 ******** 2026-03-19 02:18:01.374693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 02:18:01.374701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 02:18:01.374706 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 02:18:01.374712 | orchestrator | 2026-03-19 02:18:01.374717 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-19 02:18:01.374723 | orchestrator | Thursday 19 March 2026 02:17:44 +0000 (0:00:01.474) 0:00:10.161 ******** 2026-03-19 02:18:01.374728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 02:18:01.374747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 02:18:01.374753 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 02:18:01.374758 | orchestrator | 2026-03-19 02:18:01.374764 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-19 02:18:01.374769 | orchestrator | Thursday 19 March 2026 02:17:46 +0000 (0:00:01.607) 0:00:11.768 ******** 2026-03-19 02:18:01.374775 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 02:18:01.374780 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 02:18:01.374785 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 02:18:01.374791 | orchestrator | 2026-03-19 02:18:01.374796 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-19 02:18:01.374802 | orchestrator | Thursday 19 March 2026 02:17:47 +0000 (0:00:01.363) 0:00:13.131 ******** 2026-03-19 02:18:01.374807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 02:18:01.374813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 02:18:01.374818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 02:18:01.374823 | orchestrator | 2026-03-19 02:18:01.374829 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-03-19 02:18:01.374834 | orchestrator | Thursday 19 March 2026 02:17:49 +0000 (0:00:01.637) 0:00:14.769 ******** 2026-03-19 02:18:01.374840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 02:18:01.374845 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 02:18:01.374851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 02:18:01.374856 | orchestrator | 2026-03-19 02:18:01.374862 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-19 02:18:01.374868 | orchestrator | Thursday 19 March 2026 02:17:50 +0000 (0:00:01.404) 0:00:16.173 ******** 2026-03-19 02:18:01.374873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 02:18:01.374890 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 02:18:01.374896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 02:18:01.374909 | orchestrator | 2026-03-19 02:18:01.374915 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 02:18:01.374920 | orchestrator | Thursday 19 March 2026 02:17:52 +0000 (0:00:01.323) 0:00:17.497 ******** 2026-03-19 02:18:01.374926 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:18:01.374933 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:18:01.374951 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:18:01.374962 | orchestrator | 2026-03-19 02:18:01.374967 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-19 02:18:01.374973 | orchestrator | 
Thursday 19 March 2026 02:17:52 +0000 (0:00:00.386) 0:00:17.883 ******** 2026-03-19 02:18:01.374979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:18:01.374990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:18:01.374996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 02:18:01.375002 | orchestrator | 2026-03-19 02:18:01.375008 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-19 02:18:01.375013 | orchestrator | Thursday 19 March 2026 02:17:53 +0000 (0:00:01.128) 0:00:19.011 ******** 2026-03-19 02:18:01.375019 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:18:01.375024 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:18:01.375030 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:18:01.375035 | orchestrator | 2026-03-19 02:18:01.375041 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-19 02:18:01.375050 | orchestrator | Thursday 19 March 2026 02:17:54 +0000 (0:00:00.871) 0:00:19.882 ******** 2026-03-19 02:18:01.375056 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:18:01.375061 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:18:01.375067 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:18:01.375072 | orchestrator | 2026-03-19 02:18:01.375078 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-19 02:18:01.375086 | orchestrator | Thursday 19 March 2026 02:18:01 +0000 (0:00:06.766) 0:00:26.649 ******** 2026-03-19 02:19:41.147999 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:19:41.148291 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:19:41.148303 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:19:41.148310 | orchestrator | 2026-03-19 02:19:41.148318 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 02:19:41.148326 | orchestrator | 2026-03-19 02:19:41.148332 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 02:19:41.148339 | orchestrator | Thursday 19 March 2026 02:18:01 +0000 (0:00:00.487) 0:00:27.136 ******** 2026-03-19 02:19:41.148346 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:19:41.148354 | orchestrator | 2026-03-19 02:19:41.148361 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 02:19:41.148368 | orchestrator | Thursday 19 March 2026 02:18:02 +0000 (0:00:00.628) 0:00:27.765 ******** 2026-03-19 02:19:41.148374 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:19:41.148380 | orchestrator | 2026-03-19 02:19:41.148387 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 02:19:41.148393 | orchestrator | Thursday 
19 March 2026 02:18:02 +0000 (0:00:00.241) 0:00:28.007 ******** 2026-03-19 02:19:41.148399 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:19:41.148406 | orchestrator | 2026-03-19 02:19:41.148412 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 02:19:41.148418 | orchestrator | Thursday 19 March 2026 02:18:09 +0000 (0:00:06.650) 0:00:34.658 ******** 2026-03-19 02:19:41.148425 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:19:41.148432 | orchestrator | 2026-03-19 02:19:41.148438 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 02:19:41.148444 | orchestrator | 2026-03-19 02:19:41.148451 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 02:19:41.148457 | orchestrator | Thursday 19 March 2026 02:19:00 +0000 (0:00:51.380) 0:01:26.038 ******** 2026-03-19 02:19:41.148463 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:19:41.148470 | orchestrator | 2026-03-19 02:19:41.148476 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 02:19:41.148482 | orchestrator | Thursday 19 March 2026 02:19:01 +0000 (0:00:00.591) 0:01:26.630 ******** 2026-03-19 02:19:41.148489 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:19:41.148495 | orchestrator | 2026-03-19 02:19:41.148501 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 02:19:41.148508 | orchestrator | Thursday 19 March 2026 02:19:01 +0000 (0:00:00.213) 0:01:26.844 ******** 2026-03-19 02:19:41.148514 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:19:41.148521 | orchestrator | 2026-03-19 02:19:41.148527 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 02:19:41.148550 | orchestrator | Thursday 19 March 2026 02:19:08 +0000 (0:00:06.564) 0:01:33.409 
******** 2026-03-19 02:19:41.148558 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:19:41.148565 | orchestrator | 2026-03-19 02:19:41.148581 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 02:19:41.148589 | orchestrator | 2026-03-19 02:19:41.148596 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 02:19:41.148603 | orchestrator | Thursday 19 March 2026 02:19:18 +0000 (0:00:10.664) 0:01:44.073 ******** 2026-03-19 02:19:41.148610 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:19:41.148618 | orchestrator | 2026-03-19 02:19:41.148624 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 02:19:41.148654 | orchestrator | Thursday 19 March 2026 02:19:19 +0000 (0:00:00.773) 0:01:44.847 ******** 2026-03-19 02:19:41.148662 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:19:41.148669 | orchestrator | 2026-03-19 02:19:41.148677 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 02:19:41.148683 | orchestrator | Thursday 19 March 2026 02:19:19 +0000 (0:00:00.221) 0:01:45.069 ******** 2026-03-19 02:19:41.148690 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:19:41.148698 | orchestrator | 2026-03-19 02:19:41.148705 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 02:19:41.148713 | orchestrator | Thursday 19 March 2026 02:19:21 +0000 (0:00:01.652) 0:01:46.721 ******** 2026-03-19 02:19:41.148720 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:19:41.148728 | orchestrator | 2026-03-19 02:19:41.148735 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-19 02:19:41.148742 | orchestrator | 2026-03-19 02:19:41.148749 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-03-19 02:19:41.148756 | orchestrator | Thursday 19 March 2026 02:19:37 +0000 (0:00:16.417) 0:02:03.139 ******** 2026-03-19 02:19:41.148763 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:19:41.148769 | orchestrator | 2026-03-19 02:19:41.148775 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-19 02:19:41.148782 | orchestrator | Thursday 19 March 2026 02:19:38 +0000 (0:00:00.497) 0:02:03.636 ******** 2026-03-19 02:19:41.148789 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-19 02:19:41.148795 | orchestrator | enable_outward_rabbitmq_True 2026-03-19 02:19:41.148802 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-19 02:19:41.148808 | orchestrator | outward_rabbitmq_restart 2026-03-19 02:19:41.148815 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:19:41.148821 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:19:41.148827 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:19:41.148834 | orchestrator | 2026-03-19 02:19:41.148840 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-19 02:19:41.148847 | orchestrator | skipping: no hosts matched 2026-03-19 02:19:41.148853 | orchestrator | 2026-03-19 02:19:41.148860 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-19 02:19:41.148866 | orchestrator | skipping: no hosts matched 2026-03-19 02:19:41.148872 | orchestrator | 2026-03-19 02:19:41.148879 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-19 02:19:41.148885 | orchestrator | skipping: no hosts matched 2026-03-19 02:19:41.148892 | orchestrator | 2026-03-19 02:19:41.148898 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-19 02:19:41.148921 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-19 02:19:41.148929 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:19:41.148936 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:19:41.148942 | orchestrator | 2026-03-19 02:19:41.148948 | orchestrator | 2026-03-19 02:19:41.148955 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:19:41.148961 | orchestrator | Thursday 19 March 2026 02:19:40 +0000 (0:00:02.446) 0:02:06.083 ******** 2026-03-19 02:19:41.148968 | orchestrator | =============================================================================== 2026-03-19 02:19:41.148974 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.46s 2026-03-19 02:19:41.148981 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.87s 2026-03-19 02:19:41.149005 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.77s 2026-03-19 02:19:41.149011 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.45s 2026-03-19 02:19:41.149018 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.99s 2026-03-19 02:19:41.149024 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.68s 2026-03-19 02:19:41.149031 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.64s 2026-03-19 02:19:41.149060 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.61s 2026-03-19 02:19:41.149067 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s 2026-03-19 02:19:41.149073 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.40s 2026-03-19 02:19:41.149080 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.36s 2026-03-19 02:19:41.149086 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.32s 2026-03-19 02:19:41.149093 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.13s 2026-03-19 02:19:41.149099 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s 2026-03-19 02:19:41.149110 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.87s 2026-03-19 02:19:41.149116 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s 2026-03-19 02:19:41.149123 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2026-03-19 02:19:41.149129 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.78s 2026-03-19 02:19:41.149136 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.68s 2026-03-19 02:19:41.149142 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.50s 2026-03-19 02:19:43.705146 | orchestrator | 2026-03-19 02:19:43 | INFO  | Task 3ba5cfb2-4df9-45b1-a717-f71200f5bd8a (openvswitch) was prepared for execution. 2026-03-19 02:19:43.705244 | orchestrator | 2026-03-19 02:19:43 | INFO  | It takes a moment until task 3ba5cfb2-4df9-45b1-a717-f71200f5bd8a (openvswitch) has been started and output is visible here. 
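The rabbitmq play above renders each container from a service definition dict whose `healthcheck` block (`interval`, `retries`, `start_period`, `test`, `timeout`) is logged verbatim in the "Check rabbitmq containers" task. As a minimal sketch, assuming a hypothetical helper name and flag mapping (this is not kolla code), the kolla-style healthcheck dict maps onto `docker run` health flags like this:

```python
# Sketch only: mirrors the healthcheck dict logged for the rabbitmq
# container above; the helper and its CLI mapping are assumptions.
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict into docker run flags."""
    return [
        "--health-cmd", hc["test"][1],           # test is ['CMD-SHELL', '<cmd>']
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Values copied from the 'Check rabbitmq containers' task output above.
rabbitmq_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
    "timeout": "30",
}

args = healthcheck_to_docker_args(rabbitmq_healthcheck)
```

The same block shape recurs for every kolla service in this log, only the `test` command differs per container.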
2026-03-19 02:19:56.256870 | orchestrator | 2026-03-19 02:19:56.256968 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:19:56.256979 | orchestrator | 2026-03-19 02:19:56.256986 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:19:56.256992 | orchestrator | Thursday 19 March 2026 02:19:47 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-03-19 02:19:56.256998 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:19:56.257005 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:19:56.257011 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:19:56.257074 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:19:56.257082 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:19:56.257092 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:19:56.257102 | orchestrator | 2026-03-19 02:19:56.257111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:19:56.257121 | orchestrator | Thursday 19 March 2026 02:19:48 +0000 (0:00:00.720) 0:00:00.970 ******** 2026-03-19 02:19:56.257131 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257142 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257151 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257161 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257171 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257179 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 02:19:56.257186 | orchestrator | 2026-03-19 02:19:56.257217 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-19 02:19:56.257223 | orchestrator | 2026-03-19 02:19:56.257229 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-19 02:19:56.257235 | orchestrator | Thursday 19 March 2026 02:19:49 +0000 (0:00:00.581) 0:00:01.551 ******** 2026-03-19 02:19:56.257242 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:19:56.257250 | orchestrator | 2026-03-19 02:19:56.257256 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-19 02:19:56.257262 | orchestrator | Thursday 19 March 2026 02:19:50 +0000 (0:00:01.123) 0:00:02.675 ******** 2026-03-19 02:19:56.257268 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-19 02:19:56.257274 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-19 02:19:56.257280 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-19 02:19:56.257286 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-19 02:19:56.257291 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-19 02:19:56.257297 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-19 02:19:56.257302 | orchestrator | 2026-03-19 02:19:56.257308 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-19 02:19:56.257314 | orchestrator | Thursday 19 March 2026 02:19:51 +0000 (0:00:01.183) 0:00:03.859 ******** 2026-03-19 02:19:56.257319 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-19 02:19:56.257325 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-19 02:19:56.257331 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-19 02:19:56.257336 | orchestrator | changed: 
[testbed-node-2] => (item=openvswitch) 2026-03-19 02:19:56.257342 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-19 02:19:56.257347 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-19 02:19:56.257353 | orchestrator | 2026-03-19 02:19:56.257359 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 02:19:56.257364 | orchestrator | Thursday 19 March 2026 02:19:53 +0000 (0:00:01.592) 0:00:05.451 ******** 2026-03-19 02:19:56.257370 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-19 02:19:56.257376 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:19:56.257382 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-19 02:19:56.257396 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:19:56.257402 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-19 02:19:56.257408 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:19:56.257414 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-19 02:19:56.257419 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:19:56.257425 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-19 02:19:56.257432 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:19:56.257438 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-19 02:19:56.257444 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:19:56.257451 | orchestrator | 2026-03-19 02:19:56.257458 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-19 02:19:56.257465 | orchestrator | Thursday 19 March 2026 02:19:54 +0000 (0:00:01.095) 0:00:06.547 ******** 2026-03-19 02:19:56.257471 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:19:56.257478 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:19:56.257485 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 02:19:56.257491 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:19:56.257498 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:19:56.257504 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:19:56.257511 | orchestrator | 2026-03-19 02:19:56.257518 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-19 02:19:56.257530 | orchestrator | Thursday 19 March 2026 02:19:54 +0000 (0:00:00.688) 0:00:07.236 ******** 2026-03-19 02:19:56.257554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:56.257564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:56.257571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:56.257623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:56.257634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:56.257646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548462 | orchestrator | 2026-03-19 02:19:58.548480 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-19 02:19:58.548497 | orchestrator | Thursday 19 March 2026 02:19:56 +0000 (0:00:01.473) 0:00:08.710 ******** 2026-03-19 02:19:58.548514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:19:58.548647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215597 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215699 | orchestrator | 2026-03-19 02:20:01.215712 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-19 02:20:01.215724 | orchestrator | Thursday 19 March 2026 02:19:58 +0000 (0:00:02.287) 0:00:10.998 ******** 2026-03-19 02:20:01.215734 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:20:01.215747 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:20:01.215757 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:20:01.215767 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:20:01.215777 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:20:01.215788 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:20:01.215799 | orchestrator | 2026-03-19 02:20:01.215810 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-19 02:20:01.215821 | orchestrator | Thursday 19 March 2026 02:19:59 +0000 (0:00:00.902) 0:00:11.900 ******** 2026-03-19 02:20:01.215832 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:01.215901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 
02:20:25.200810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 02:20:25.200852 | orchestrator | 2026-03-19 02:20:25.200861 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 02:20:25.200867 | orchestrator | Thursday 19 March 2026 02:20:01 +0000 (0:00:01.773) 0:00:13.674 ******** 2026-03-19 02:20:25.200871 | orchestrator | 2026-03-19 02:20:25.200875 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 02:20:25.200878 | orchestrator | Thursday 19 March 2026 02:20:01 +0000 (0:00:00.296) 0:00:13.971 ******** 2026-03-19 02:20:25.200882 | orchestrator | 2026-03-19 02:20:25.200891 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 02:20:25.200895 | orchestrator | Thursday 19 March 2026 02:20:01 +0000 (0:00:00.131) 0:00:14.102 ******** 2026-03-19 02:20:25.200899 | orchestrator | 2026-03-19 02:20:25.200903 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-03-19 02:20:25.200907 | orchestrator | Thursday 19 March 2026 02:20:01 +0000 (0:00:00.132) 0:00:14.235 ******** 2026-03-19 02:20:25.200910 | orchestrator | 2026-03-19 02:20:25.200914 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 02:20:25.200918 | orchestrator | Thursday 19 March 2026 02:20:01 +0000 (0:00:00.130) 0:00:14.365 ******** 2026-03-19 02:20:25.200922 | orchestrator | 2026-03-19 02:20:25.200926 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-19 02:20:25.200929 | orchestrator | Thursday 19 March 2026 02:20:02 +0000 (0:00:00.126) 0:00:14.492 ******** 2026-03-19 02:20:25.200933 | orchestrator | 2026-03-19 02:20:25.200938 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-19 02:20:25.200944 | orchestrator | Thursday 19 March 2026 02:20:02 +0000 (0:00:00.125) 0:00:14.617 ******** 2026-03-19 02:20:25.200950 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:20:25.200957 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:20:25.200963 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:20:25.200969 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:20:25.200975 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:20:25.201009 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:20:25.201015 | orchestrator | 2026-03-19 02:20:25.201022 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-19 02:20:25.201029 | orchestrator | Thursday 19 March 2026 02:20:09 +0000 (0:00:07.038) 0:00:21.656 ******** 2026-03-19 02:20:25.201035 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:20:25.201047 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:20:25.201053 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:20:25.201059 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:20:25.201064 | orchestrator | ok: 
[testbed-node-4] 2026-03-19 02:20:25.201068 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:20:25.201072 | orchestrator | 2026-03-19 02:20:25.201076 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-19 02:20:25.201080 | orchestrator | Thursday 19 March 2026 02:20:10 +0000 (0:00:01.084) 0:00:22.741 ******** 2026-03-19 02:20:25.201083 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:20:25.201087 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:20:25.201091 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:20:25.201095 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:20:25.201098 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:20:25.201102 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:20:25.201106 | orchestrator | 2026-03-19 02:20:25.201110 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-19 02:20:25.201113 | orchestrator | Thursday 19 March 2026 02:20:18 +0000 (0:00:07.966) 0:00:30.708 ******** 2026-03-19 02:20:25.201117 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-19 02:20:25.201122 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-19 02:20:25.201126 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-19 02:20:25.201129 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-19 02:20:25.201133 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-19 02:20:25.201137 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-19 
02:20:25.201141 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-19 02:20:25.201153 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-19 02:20:38.227298 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-19 02:20:38.227386 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-19 02:20:38.227393 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-19 02:20:38.227398 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-19 02:20:38.227402 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227406 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227410 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227414 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227417 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227421 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 02:20:38.227425 | orchestrator | 2026-03-19 02:20:38.227430 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-03-19 02:20:38.227435 | orchestrator | Thursday 19 March 2026 02:20:25 +0000 (0:00:06.861) 0:00:37.569 ******** 2026-03-19 02:20:38.227441 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-19 02:20:38.227445 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:20:38.227450 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-19 02:20:38.227454 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:20:38.227458 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-19 02:20:38.227462 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:20:38.227466 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-19 02:20:38.227470 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-19 02:20:38.227474 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-19 02:20:38.227478 | orchestrator | 2026-03-19 02:20:38.227482 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-19 02:20:38.227485 | orchestrator | Thursday 19 March 2026 02:20:27 +0000 (0:00:02.520) 0:00:40.090 ******** 2026-03-19 02:20:38.227489 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-19 02:20:38.227493 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:20:38.227497 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-19 02:20:38.227501 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:20:38.227505 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-19 02:20:38.227509 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:20:38.227513 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-19 02:20:38.227517 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-19 02:20:38.227533 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-19 02:20:38.227537 | orchestrator 
| 2026-03-19 02:20:38.227541 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-19 02:20:38.227545 | orchestrator | Thursday 19 March 2026 02:20:30 +0000 (0:00:03.095) 0:00:43.185 ******** 2026-03-19 02:20:38.227549 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:20:38.227552 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:20:38.227574 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:20:38.227578 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:20:38.227582 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:20:38.227585 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:20:38.227589 | orchestrator | 2026-03-19 02:20:38.227593 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:20:38.227598 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 02:20:38.227603 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 02:20:38.227607 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 02:20:38.227611 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 02:20:38.227615 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 02:20:38.227618 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 02:20:38.227622 | orchestrator | 2026-03-19 02:20:38.227626 | orchestrator | 2026-03-19 02:20:38.227630 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:20:38.227634 | orchestrator | Thursday 19 March 2026 02:20:37 +0000 (0:00:07.044) 0:00:50.230 ******** 2026-03-19 02:20:38.227648 | 
orchestrator | =============================================================================== 2026-03-19 02:20:38.227653 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.01s 2026-03-19 02:20:38.227657 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.04s 2026-03-19 02:20:38.227660 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.86s 2026-03-19 02:20:38.227664 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.10s 2026-03-19 02:20:38.227668 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.52s 2026-03-19 02:20:38.227672 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.29s 2026-03-19 02:20:38.227675 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.77s 2026-03-19 02:20:38.227679 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.59s 2026-03-19 02:20:38.227683 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.47s 2026-03-19 02:20:38.227687 | orchestrator | module-load : Load modules ---------------------------------------------- 1.18s 2026-03-19 02:20:38.227691 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.12s 2026-03-19 02:20:38.227695 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.10s 2026-03-19 02:20:38.227698 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.08s 2026-03-19 02:20:38.227702 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.94s 2026-03-19 02:20:38.227707 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.90s 2026-03-19 02:20:38.227713 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.72s 2026-03-19 02:20:38.227719 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.69s 2026-03-19 02:20:38.227725 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2026-03-19 02:20:40.501201 | orchestrator | 2026-03-19 02:20:40 | INFO  | Task f65bd3cb-a85c-40be-b22a-bf3a1c1700c5 (ovn) was prepared for execution. 2026-03-19 02:20:40.501295 | orchestrator | 2026-03-19 02:20:40 | INFO  | It takes a moment until task f65bd3cb-a85c-40be-b22a-bf3a1c1700c5 (ovn) has been started and output is visible here. 2026-03-19 02:20:50.891692 | orchestrator | 2026-03-19 02:20:50.891837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:20:50.891864 | orchestrator | 2026-03-19 02:20:50.891882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:20:50.891899 | orchestrator | Thursday 19 March 2026 02:20:44 +0000 (0:00:00.162) 0:00:00.162 ******** 2026-03-19 02:20:50.891916 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:20:50.891934 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:20:50.892014 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:20:50.892031 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:20:50.892047 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:20:50.892063 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:20:50.892078 | orchestrator | 2026-03-19 02:20:50.892095 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:20:50.892111 | orchestrator | Thursday 19 March 2026 02:20:45 +0000 (0:00:00.686) 0:00:00.849 ******** 2026-03-19 02:20:50.892150 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-19 02:20:50.892169 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-19 
02:20:50.892185 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-19 02:20:50.892201 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-19 02:20:50.892216 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-19 02:20:50.892231 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-19 02:20:50.892247 | orchestrator | 2026-03-19 02:20:50.892263 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-19 02:20:50.892280 | orchestrator | 2026-03-19 02:20:50.892296 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-19 02:20:50.892313 | orchestrator | Thursday 19 March 2026 02:20:45 +0000 (0:00:00.790) 0:00:01.639 ******** 2026-03-19 02:20:50.892330 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:20:50.892348 | orchestrator | 2026-03-19 02:20:50.892364 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-19 02:20:50.892380 | orchestrator | Thursday 19 March 2026 02:20:47 +0000 (0:00:01.078) 0:00:02.718 ******** 2026-03-19 02:20:50.892399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892567 | orchestrator | 2026-03-19 02:20:50.892584 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-19 02:20:50.892601 | orchestrator | Thursday 19 March 2026 02:20:48 +0000 (0:00:01.156) 0:00:03.875 ******** 2026-03-19 02:20:50.892628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892677 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892737 | orchestrator | 2026-03-19 02:20:50.892754 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-19 02:20:50.892771 | orchestrator | Thursday 19 March 2026 02:20:49 +0000 (0:00:01.512) 0:00:05.387 ******** 2026-03-19 02:20:50.892788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:20:50.892835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553802 | orchestrator | 2026-03-19 02:21:15.553810 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-19 02:21:15.553818 | orchestrator | Thursday 19 March 2026 02:20:50 +0000 (0:00:01.162) 0:00:06.549 ******** 2026-03-19 02:21:15.553825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553898 | orchestrator | 2026-03-19 02:21:15.553904 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-19 02:21:15.553910 | orchestrator | Thursday 19 March 2026 02:20:52 +0000 (0:00:01.510) 0:00:08.060 ******** 
2026-03-19 02:21:15.553962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.553995 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.554001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:21:15.554008 | orchestrator | 2026-03-19 02:21:15.554057 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-19 02:21:15.554066 | orchestrator | Thursday 19 March 2026 02:20:53 +0000 (0:00:01.307) 0:00:09.367 ******** 2026-03-19 02:21:15.554073 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:21:15.554080 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:21:15.554087 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:21:15.554093 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:21:15.554099 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:21:15.554105 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:21:15.554112 | orchestrator | 2026-03-19 02:21:15.554118 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-19 02:21:15.554124 | orchestrator | Thursday 19 March 2026 02:20:56 +0000 (0:00:02.555) 0:00:11.923 ******** 2026-03-19 02:21:15.554130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
2026-03-19 02:21:15.554137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-19 02:21:15.554144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-19 02:21:15.554151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-19 02:21:15.554158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-19 02:21:15.554165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-19 02:21:15.554178 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 02:21:49.496536 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496552 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496615 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-19 02:21:49.496658 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496675 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496689 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496717 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 02:21:49.496747 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 02:21:49.496763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 02:21:49.496778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 02:21:49.496793 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 02:21:49.496807 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-19 02:21:49.496822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 02:21:49.496838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.496856 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.496872 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.497006 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.497022 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.497038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 02:21:49.497055 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 02:21:49.497073 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 02:21:49.497091 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 02:21:49.497122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 02:21:49.497151 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 02:21:49.497169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 02:21:49.497186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-03-19 02:21:49.497243 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-19 02:21:49.497262 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-19 02:21:49.497288 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-19 02:21:49.497303 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-19 02:21:49.497317 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-19 02:21:49.497332 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 02:21:49.497347 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 02:21:49.497362 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 02:21:49.497377 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 02:21:49.497395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 02:21:49.497408 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 02:21:49.497422 | orchestrator | 2026-03-19 02:21:49.497438 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-19 02:21:49.497455 | orchestrator | Thursday 19 March 2026 02:21:14 +0000 (0:00:18.728) 0:00:30.651 ******** 2026-03-19 02:21:49.497474 | orchestrator | 2026-03-19 02:21:49.497487 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 02:21:49.497501 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.235) 0:00:30.887 ******** 2026-03-19 02:21:49.497518 | orchestrator | 2026-03-19 02:21:49.497532 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 02:21:49.497546 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.063) 0:00:30.951 ******** 2026-03-19 02:21:49.497561 | orchestrator | 2026-03-19 02:21:49.497575 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 02:21:49.497589 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.064) 0:00:31.016 ******** 2026-03-19 02:21:49.497604 | orchestrator | 2026-03-19 02:21:49.497618 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 02:21:49.497632 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.063) 0:00:31.080 ******** 2026-03-19 02:21:49.497646 | orchestrator | 2026-03-19 02:21:49.497660 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 02:21:49.497675 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.064) 0:00:31.144 ******** 2026-03-19 02:21:49.497690 | orchestrator | 2026-03-19 02:21:49.497705 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-19 02:21:49.497720 | orchestrator | Thursday 19 March 2026 02:21:15 +0000 (0:00:00.065) 0:00:31.209 ******** 2026-03-19 02:21:49.497735 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:21:49.497751 | orchestrator | ok: 
[testbed-node-4] 2026-03-19 02:21:49.497766 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:21:49.497781 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:21:49.497795 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:21:49.497809 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:21:49.497824 | orchestrator | 2026-03-19 02:21:49.497838 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-19 02:21:49.497852 | orchestrator | Thursday 19 March 2026 02:21:17 +0000 (0:00:01.600) 0:00:32.810 ******** 2026-03-19 02:21:49.497903 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:21:49.497919 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:21:49.497933 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:21:49.497947 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:21:49.497962 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:21:49.497977 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:21:49.497993 | orchestrator | 2026-03-19 02:21:49.498008 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-19 02:21:49.498097 | orchestrator | 2026-03-19 02:21:49.498114 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 02:21:49.498130 | orchestrator | Thursday 19 March 2026 02:21:47 +0000 (0:00:30.179) 0:01:02.990 ******** 2026-03-19 02:21:49.498146 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:21:49.498163 | orchestrator | 2026-03-19 02:21:49.498180 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 02:21:49.498195 | orchestrator | Thursday 19 March 2026 02:21:47 +0000 (0:00:00.683) 0:01:03.673 ******** 2026-03-19 02:21:49.498210 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-19 02:21:49.498225 | orchestrator | 2026-03-19 02:21:49.498240 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-19 02:21:49.498255 | orchestrator | Thursday 19 March 2026 02:21:48 +0000 (0:00:00.549) 0:01:04.222 ******** 2026-03-19 02:21:49.498270 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:21:49.498286 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:21:49.498302 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:21:49.498318 | orchestrator | 2026-03-19 02:21:49.498333 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-19 02:21:49.498362 | orchestrator | Thursday 19 March 2026 02:21:49 +0000 (0:00:00.930) 0:01:05.153 ******** 2026-03-19 02:22:00.394336 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394420 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394427 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394431 | orchestrator | 2026-03-19 02:22:00.394436 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-19 02:22:00.394454 | orchestrator | Thursday 19 March 2026 02:21:49 +0000 (0:00:00.329) 0:01:05.482 ******** 2026-03-19 02:22:00.394458 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394461 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394465 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394469 | orchestrator | 2026-03-19 02:22:00.394473 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-19 02:22:00.394478 | orchestrator | Thursday 19 March 2026 02:21:50 +0000 (0:00:00.325) 0:01:05.808 ******** 2026-03-19 02:22:00.394485 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394491 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394497 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394503 | orchestrator | 
2026-03-19 02:22:00.394509 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-19 02:22:00.394517 | orchestrator | Thursday 19 March 2026 02:21:50 +0000 (0:00:00.333) 0:01:06.142 ******** 2026-03-19 02:22:00.394524 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394539 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394544 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394548 | orchestrator | 2026-03-19 02:22:00.394552 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-19 02:22:00.394556 | orchestrator | Thursday 19 March 2026 02:21:50 +0000 (0:00:00.470) 0:01:06.612 ******** 2026-03-19 02:22:00.394560 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394566 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394569 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394573 | orchestrator | 2026-03-19 02:22:00.394577 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-19 02:22:00.394600 | orchestrator | Thursday 19 March 2026 02:21:51 +0000 (0:00:00.290) 0:01:06.903 ******** 2026-03-19 02:22:00.394604 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394607 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394611 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394615 | orchestrator | 2026-03-19 02:22:00.394619 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-19 02:22:00.394623 | orchestrator | Thursday 19 March 2026 02:21:51 +0000 (0:00:00.311) 0:01:07.215 ******** 2026-03-19 02:22:00.394627 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394630 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394634 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394638 | orchestrator | 2026-03-19 
02:22:00.394644 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-19 02:22:00.394650 | orchestrator | Thursday 19 March 2026 02:21:51 +0000 (0:00:00.282) 0:01:07.497 ******** 2026-03-19 02:22:00.394656 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394663 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394669 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394676 | orchestrator | 2026-03-19 02:22:00.394682 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-19 02:22:00.394688 | orchestrator | Thursday 19 March 2026 02:21:52 +0000 (0:00:00.322) 0:01:07.820 ******** 2026-03-19 02:22:00.394694 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394700 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394704 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394708 | orchestrator | 2026-03-19 02:22:00.394712 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-19 02:22:00.394716 | orchestrator | Thursday 19 March 2026 02:21:52 +0000 (0:00:00.520) 0:01:08.340 ******** 2026-03-19 02:22:00.394719 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394723 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394727 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394731 | orchestrator | 2026-03-19 02:22:00.394735 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-19 02:22:00.394738 | orchestrator | Thursday 19 March 2026 02:21:52 +0000 (0:00:00.295) 0:01:08.636 ******** 2026-03-19 02:22:00.394742 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394746 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394750 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394753 | orchestrator | 2026-03-19 
02:22:00.394757 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-19 02:22:00.394761 | orchestrator | Thursday 19 March 2026 02:21:53 +0000 (0:00:00.305) 0:01:08.942 ******** 2026-03-19 02:22:00.394765 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394769 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394772 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394776 | orchestrator | 2026-03-19 02:22:00.394780 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-19 02:22:00.394784 | orchestrator | Thursday 19 March 2026 02:21:53 +0000 (0:00:00.283) 0:01:09.225 ******** 2026-03-19 02:22:00.394787 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394791 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394795 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394799 | orchestrator | 2026-03-19 02:22:00.394803 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-19 02:22:00.394807 | orchestrator | Thursday 19 March 2026 02:21:54 +0000 (0:00:00.493) 0:01:09.718 ******** 2026-03-19 02:22:00.394810 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394814 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394818 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394822 | orchestrator | 2026-03-19 02:22:00.394826 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-19 02:22:00.394834 | orchestrator | Thursday 19 March 2026 02:21:54 +0000 (0:00:00.302) 0:01:10.021 ******** 2026-03-19 02:22:00.394838 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394842 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394846 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394849 | orchestrator | 2026-03-19 
02:22:00.394853 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-19 02:22:00.394857 | orchestrator | Thursday 19 March 2026 02:21:54 +0000 (0:00:00.296) 0:01:10.318 ******** 2026-03-19 02:22:00.394900 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.394906 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.394910 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.394915 | orchestrator | 2026-03-19 02:22:00.394919 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 02:22:00.394927 | orchestrator | Thursday 19 March 2026 02:21:54 +0000 (0:00:00.288) 0:01:10.606 ******** 2026-03-19 02:22:00.394932 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:22:00.394936 | orchestrator | 2026-03-19 02:22:00.394941 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-19 02:22:00.394945 | orchestrator | Thursday 19 March 2026 02:21:55 +0000 (0:00:00.702) 0:01:11.309 ******** 2026-03-19 02:22:00.394950 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394954 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394959 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394963 | orchestrator | 2026-03-19 02:22:00.394967 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-19 02:22:00.394972 | orchestrator | Thursday 19 March 2026 02:21:56 +0000 (0:00:00.434) 0:01:11.743 ******** 2026-03-19 02:22:00.394977 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:00.394981 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:00.394985 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:00.394989 | orchestrator | 2026-03-19 02:22:00.394994 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-03-19 02:22:00.394998 | orchestrator | Thursday 19 March 2026 02:21:56 +0000 (0:00:00.427) 0:01:12.171 ******** 2026-03-19 02:22:00.395003 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395007 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395012 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395016 | orchestrator | 2026-03-19 02:22:00.395020 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-19 02:22:00.395025 | orchestrator | Thursday 19 March 2026 02:21:56 +0000 (0:00:00.341) 0:01:12.513 ******** 2026-03-19 02:22:00.395029 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395034 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395037 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395041 | orchestrator | 2026-03-19 02:22:00.395045 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-19 02:22:00.395049 | orchestrator | Thursday 19 March 2026 02:21:57 +0000 (0:00:00.538) 0:01:13.051 ******** 2026-03-19 02:22:00.395053 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395057 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395060 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395064 | orchestrator | 2026-03-19 02:22:00.395068 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-19 02:22:00.395072 | orchestrator | Thursday 19 March 2026 02:21:57 +0000 (0:00:00.320) 0:01:13.371 ******** 2026-03-19 02:22:00.395076 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395079 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395083 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395087 | orchestrator | 2026-03-19 02:22:00.395091 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-03-19 02:22:00.395094 | orchestrator | Thursday 19 March 2026 02:21:58 +0000 (0:00:00.343) 0:01:13.714 ******** 2026-03-19 02:22:00.395107 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395111 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395115 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395118 | orchestrator | 2026-03-19 02:22:00.395122 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-19 02:22:00.395126 | orchestrator | Thursday 19 March 2026 02:21:58 +0000 (0:00:00.341) 0:01:14.056 ******** 2026-03-19 02:22:00.395130 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:00.395133 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:00.395137 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:00.395141 | orchestrator | 2026-03-19 02:22:00.395145 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-19 02:22:00.395148 | orchestrator | Thursday 19 March 2026 02:21:58 +0000 (0:00:00.543) 0:01:14.599 ******** 2026-03-19 02:22:00.395154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:00.395160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-19 02:22:00.395164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:00.395179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.560974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561175 | orchestrator | 2026-03-19 02:22:06.561189 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-19 02:22:06.561201 | orchestrator | Thursday 19 March 2026 02:22:00 +0000 (0:00:01.454) 0:01:16.054 ******** 2026-03-19 02:22:06.561214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561365 | orchestrator | 2026-03-19 02:22:06.561376 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-19 02:22:06.561387 | orchestrator | Thursday 19 March 2026 02:22:04 +0000 (0:00:03.708) 0:01:19.763 ******** 2026-03-19 02:22:06.561398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:06.561474 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.928608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.928775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.928800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.928815 | orchestrator | 2026-03-19 02:22:35.928862 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:35.928874 | 
orchestrator | Thursday 19 March 2026 02:22:06 +0000 (0:00:02.089) 0:01:21.852 ******** 2026-03-19 02:22:35.928882 | orchestrator | 2026-03-19 02:22:35.928891 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:35.928898 | orchestrator | Thursday 19 March 2026 02:22:06 +0000 (0:00:00.064) 0:01:21.917 ******** 2026-03-19 02:22:35.928906 | orchestrator | 2026-03-19 02:22:35.928914 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:35.928922 | orchestrator | Thursday 19 March 2026 02:22:06 +0000 (0:00:00.235) 0:01:22.152 ******** 2026-03-19 02:22:35.928930 | orchestrator | 2026-03-19 02:22:35.928938 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-19 02:22:35.928945 | orchestrator | Thursday 19 March 2026 02:22:06 +0000 (0:00:00.064) 0:01:22.217 ******** 2026-03-19 02:22:35.928953 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:22:35.928963 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:22:35.928970 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:22:35.928978 | orchestrator | 2026-03-19 02:22:35.928986 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-19 02:22:35.928994 | orchestrator | Thursday 19 March 2026 02:22:14 +0000 (0:00:07.475) 0:01:29.693 ******** 2026-03-19 02:22:35.929002 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:22:35.929010 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:22:35.929017 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:22:35.929025 | orchestrator | 2026-03-19 02:22:35.929033 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-19 02:22:35.929041 | orchestrator | Thursday 19 March 2026 02:22:21 +0000 (0:00:07.513) 0:01:37.206 ******** 2026-03-19 02:22:35.929049 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 02:22:35.929056 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:22:35.929064 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:22:35.929072 | orchestrator | 2026-03-19 02:22:35.929080 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-19 02:22:35.929087 | orchestrator | Thursday 19 March 2026 02:22:28 +0000 (0:00:07.368) 0:01:44.574 ******** 2026-03-19 02:22:35.929095 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:22:35.929104 | orchestrator | 2026-03-19 02:22:35.929113 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-19 02:22:35.929122 | orchestrator | Thursday 19 March 2026 02:22:29 +0000 (0:00:00.137) 0:01:44.712 ******** 2026-03-19 02:22:35.929131 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:35.929141 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:35.929150 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:35.929159 | orchestrator | 2026-03-19 02:22:35.929168 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-19 02:22:35.929176 | orchestrator | Thursday 19 March 2026 02:22:30 +0000 (0:00:01.003) 0:01:45.716 ******** 2026-03-19 02:22:35.929185 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:35.929203 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:35.929212 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:22:35.929221 | orchestrator | 2026-03-19 02:22:35.929230 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-19 02:22:35.929239 | orchestrator | Thursday 19 March 2026 02:22:30 +0000 (0:00:00.612) 0:01:46.329 ******** 2026-03-19 02:22:35.929248 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:35.929257 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:35.929266 | orchestrator | ok: [testbed-node-2] 2026-03-19 
02:22:35.929275 | orchestrator | 2026-03-19 02:22:35.929289 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-19 02:22:35.929320 | orchestrator | Thursday 19 March 2026 02:22:31 +0000 (0:00:00.807) 0:01:47.137 ******** 2026-03-19 02:22:35.929335 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:22:35.929348 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:22:35.929361 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:22:35.929375 | orchestrator | 2026-03-19 02:22:35.929387 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-19 02:22:35.929400 | orchestrator | Thursday 19 March 2026 02:22:32 +0000 (0:00:00.639) 0:01:47.776 ******** 2026-03-19 02:22:35.929414 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:35.929429 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:35.929466 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:35.929481 | orchestrator | 2026-03-19 02:22:35.929496 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-19 02:22:35.929506 | orchestrator | Thursday 19 March 2026 02:22:33 +0000 (0:00:01.202) 0:01:48.979 ******** 2026-03-19 02:22:35.929513 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:35.929521 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:35.929529 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:35.929537 | orchestrator | 2026-03-19 02:22:35.929545 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-19 02:22:35.929553 | orchestrator | Thursday 19 March 2026 02:22:34 +0000 (0:00:00.760) 0:01:49.740 ******** 2026-03-19 02:22:35.929561 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:22:35.929568 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:22:35.929576 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:22:35.929584 | orchestrator | 2026-03-19 
02:22:35.929591 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-19 02:22:35.929599 | orchestrator | Thursday 19 March 2026 02:22:34 +0000 (0:00:00.331) 0:01:50.072 ******** 2026-03-19 02:22:35.929609 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929619 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929636 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929652 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:35.929699 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951075 | orchestrator | 2026-03-19 02:22:42.951236 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-19 02:22:42.951257 | orchestrator | Thursday 19 March 2026 02:22:35 +0000 (0:00:01.512) 0:01:51.584 ******** 2026-03-19 02:22:42.951272 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951287 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951299 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 02:22:42.951415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951427 | orchestrator | 2026-03-19 02:22:42.951439 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-19 02:22:42.951450 | orchestrator | Thursday 19 March 2026 02:22:39 +0000 (0:00:03.755) 0:01:55.340 ******** 2026-03-19 02:22:42.951481 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 
02:22:42.951516 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951560 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 02:22:42.951598 | orchestrator | 2026-03-19 02:22:42.951610 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:42.951621 | orchestrator | Thursday 19 March 2026 02:22:42 +0000 (0:00:03.066) 0:01:58.406 ******** 2026-03-19 02:22:42.951632 | orchestrator | 2026-03-19 02:22:42.951643 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:42.951654 | orchestrator | Thursday 19 March 2026 02:22:42 +0000 (0:00:00.060) 0:01:58.467 ******** 2026-03-19 02:22:42.951665 | orchestrator | 2026-03-19 02:22:42.951676 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 02:22:42.951686 | orchestrator | Thursday 19 March 2026 02:22:42 +0000 (0:00:00.067) 0:01:58.535 ******** 2026-03-19 02:22:42.951697 | orchestrator | 2026-03-19 02:22:42.951715 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-19 02:23:06.841280 | orchestrator | Thursday 19 March 2026 02:22:42 +0000 (0:00:00.066) 0:01:58.601 ******** 2026-03-19 02:23:06.841403 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:23:06.841422 | orchestrator | changed: 
[testbed-node-2] 2026-03-19 02:23:06.841435 | orchestrator | 2026-03-19 02:23:06.841447 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-19 02:23:06.841459 | orchestrator | Thursday 19 March 2026 02:22:49 +0000 (0:00:06.177) 0:02:04.778 ******** 2026-03-19 02:23:06.841470 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:23:06.841481 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:23:06.841492 | orchestrator | 2026-03-19 02:23:06.841503 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-19 02:23:06.841544 | orchestrator | Thursday 19 March 2026 02:22:55 +0000 (0:00:06.166) 0:02:10.945 ******** 2026-03-19 02:23:06.841555 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:23:06.841566 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:23:06.841577 | orchestrator | 2026-03-19 02:23:06.841588 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-19 02:23:06.841599 | orchestrator | Thursday 19 March 2026 02:23:01 +0000 (0:00:06.159) 0:02:17.105 ******** 2026-03-19 02:23:06.841610 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:23:06.841621 | orchestrator | 2026-03-19 02:23:06.841632 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-19 02:23:06.841642 | orchestrator | Thursday 19 March 2026 02:23:01 +0000 (0:00:00.134) 0:02:17.240 ******** 2026-03-19 02:23:06.841653 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:23:06.841665 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:06.841676 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:06.841687 | orchestrator | 2026-03-19 02:23:06.841698 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-19 02:23:06.841709 | orchestrator | Thursday 19 March 2026 02:23:02 +0000 (0:00:01.005) 0:02:18.245 ******** 
2026-03-19 02:23:06.841720 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:23:06.841730 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:23:06.841741 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:23:06.841752 | orchestrator | 2026-03-19 02:23:06.841763 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-19 02:23:06.841774 | orchestrator | Thursday 19 March 2026 02:23:03 +0000 (0:00:00.622) 0:02:18.868 ******** 2026-03-19 02:23:06.841785 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:23:06.841828 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:06.841848 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:06.841866 | orchestrator | 2026-03-19 02:23:06.841886 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-19 02:23:06.841908 | orchestrator | Thursday 19 March 2026 02:23:03 +0000 (0:00:00.787) 0:02:19.655 ******** 2026-03-19 02:23:06.841929 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:23:06.841947 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:23:06.841961 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:23:06.841973 | orchestrator | 2026-03-19 02:23:06.841986 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-19 02:23:06.841999 | orchestrator | Thursday 19 March 2026 02:23:04 +0000 (0:00:00.634) 0:02:20.289 ******** 2026-03-19 02:23:06.842012 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:23:06.842082 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:06.842094 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:06.842107 | orchestrator | 2026-03-19 02:23:06.842120 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-19 02:23:06.842132 | orchestrator | Thursday 19 March 2026 02:23:05 +0000 (0:00:00.995) 0:02:21.285 ******** 2026-03-19 02:23:06.842145 | orchestrator 
| ok: [testbed-node-0] 2026-03-19 02:23:06.842157 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:06.842169 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:06.842180 | orchestrator | 2026-03-19 02:23:06.842191 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:23:06.842204 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-19 02:23:06.842217 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-19 02:23:06.842228 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-19 02:23:06.842239 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:23:06.842264 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:23:06.842275 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:23:06.842286 | orchestrator | 2026-03-19 02:23:06.842297 | orchestrator | 2026-03-19 02:23:06.842324 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:23:06.842336 | orchestrator | Thursday 19 March 2026 02:23:06 +0000 (0:00:00.847) 0:02:22.132 ******** 2026-03-19 02:23:06.842347 | orchestrator | =============================================================================== 2026-03-19 02:23:06.842358 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.18s 2026-03-19 02:23:06.842369 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.73s 2026-03-19 02:23:06.842379 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.68s 2026-03-19 02:23:06.842391 | orchestrator | ovn-db 
: Restart ovn-nb-db container ----------------------------------- 13.65s 2026-03-19 02:23:06.842401 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.53s 2026-03-19 02:23:06.842432 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.76s 2026-03-19 02:23:06.842444 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.71s 2026-03-19 02:23:06.842455 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.07s 2026-03-19 02:23:06.842466 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.56s 2026-03-19 02:23:06.842476 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.09s 2026-03-19 02:23:06.842487 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.60s 2026-03-19 02:23:06.842498 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2026-03-19 02:23:06.842508 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-03-19 02:23:06.842519 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2026-03-19 02:23:06.842530 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2026-03-19 02:23:06.842541 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.31s 2026-03-19 02:23:06.842552 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.20s 2026-03-19 02:23:06.842562 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.16s 2026-03-19 02:23:06.842573 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.16s 2026-03-19 02:23:06.842584 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.08s 2026-03-19 02:23:07.125680 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 02:23:07.125780 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-03-19 02:23:09.286750 | orchestrator | 2026-03-19 02:23:09 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-19 02:23:19.465214 | orchestrator | 2026-03-19 02:23:19 | INFO  | Task 8af5cdfa-be38-419a-b077-e331b3f9e46f (wipe-partitions) was prepared for execution. 2026-03-19 02:23:19.465300 | orchestrator | 2026-03-19 02:23:19 | INFO  | It takes a moment until task 8af5cdfa-be38-419a-b077-e331b3f9e46f (wipe-partitions) has been started and output is visible here. 2026-03-19 02:23:32.325315 | orchestrator | 2026-03-19 02:23:32.325471 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-19 02:23:32.325498 | orchestrator | 2026-03-19 02:23:32.325511 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-19 02:23:32.325523 | orchestrator | Thursday 19 March 2026 02:23:23 +0000 (0:00:00.131) 0:00:00.131 ******** 2026-03-19 02:23:32.325562 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:23:32.325576 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:23:32.325587 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:23:32.325598 | orchestrator | 2026-03-19 02:23:32.325609 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-19 02:23:32.325620 | orchestrator | Thursday 19 March 2026 02:23:24 +0000 (0:00:00.592) 0:00:00.724 ******** 2026-03-19 02:23:32.325630 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:23:32.325641 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:23:32.325652 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:23:32.325663 | orchestrator | 2026-03-19 02:23:32.325674 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-19 02:23:32.325685 | orchestrator | Thursday 19 March 2026 02:23:24 +0000 (0:00:00.341) 0:00:01.065 ******** 2026-03-19 02:23:32.325696 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:23:32.325707 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:23:32.325718 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:23:32.325729 | orchestrator | 2026-03-19 02:23:32.325740 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-19 02:23:32.325750 | orchestrator | Thursday 19 March 2026 02:23:25 +0000 (0:00:00.620) 0:00:01.686 ******** 2026-03-19 02:23:32.325761 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:23:32.325772 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:23:32.325817 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:23:32.325828 | orchestrator | 2026-03-19 02:23:32.325839 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-19 02:23:32.325852 | orchestrator | Thursday 19 March 2026 02:23:25 +0000 (0:00:00.259) 0:00:01.945 ******** 2026-03-19 02:23:32.325864 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 02:23:32.325877 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 02:23:32.325890 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 02:23:32.325902 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 02:23:32.325914 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 02:23:32.325927 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 02:23:32.325960 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 02:23:32.325980 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 02:23:32.325998 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-03-19 02:23:32.326081 | orchestrator | 2026-03-19 02:23:32.326104 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-19 02:23:32.326121 | orchestrator | Thursday 19 March 2026 02:23:26 +0000 (0:00:01.308) 0:00:03.254 ******** 2026-03-19 02:23:32.326132 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 02:23:32.326143 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 02:23:32.326154 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 02:23:32.326165 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 02:23:32.326175 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 02:23:32.326186 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 02:23:32.326197 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 02:23:32.326207 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 02:23:32.326218 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-19 02:23:32.326229 | orchestrator | 2026-03-19 02:23:32.326240 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-19 02:23:32.326250 | orchestrator | Thursday 19 March 2026 02:23:28 +0000 (0:00:01.598) 0:00:04.852 ******** 2026-03-19 02:23:32.326261 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-19 02:23:32.326272 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-19 02:23:32.326283 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-19 02:23:32.326293 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-19 02:23:32.326319 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-19 02:23:32.326337 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-19 02:23:32.326355 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-19 02:23:32.326373 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-19 02:23:32.326390 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-19 02:23:32.326407 | orchestrator | 2026-03-19 02:23:32.326427 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-19 02:23:32.326446 | orchestrator | Thursday 19 March 2026 02:23:30 +0000 (0:00:02.228) 0:00:07.081 ******** 2026-03-19 02:23:32.326464 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:23:32.326482 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:23:32.326493 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:23:32.326504 | orchestrator | 2026-03-19 02:23:32.326515 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-19 02:23:32.326526 | orchestrator | Thursday 19 March 2026 02:23:31 +0000 (0:00:00.624) 0:00:07.705 ******** 2026-03-19 02:23:32.326537 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:23:32.326548 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:23:32.326558 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:23:32.326569 | orchestrator | 2026-03-19 02:23:32.326580 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:23:32.326592 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:32.326604 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:32.326636 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:32.326650 | orchestrator | 2026-03-19 02:23:32.326672 | orchestrator | 2026-03-19 02:23:32.326700 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:23:32.326718 | orchestrator | Thursday 19 March 2026 02:23:31 +0000 (0:00:00.694) 
0:00:08.399 ******** 2026-03-19 02:23:32.326734 | orchestrator | =============================================================================== 2026-03-19 02:23:32.326751 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.23s 2026-03-19 02:23:32.326769 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s 2026-03-19 02:23:32.326856 | orchestrator | Check device availability ----------------------------------------------- 1.31s 2026-03-19 02:23:32.326875 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s 2026-03-19 02:23:32.326892 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-03-19 02:23:32.326911 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s 2026-03-19 02:23:32.326930 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-03-19 02:23:32.326949 | orchestrator | Remove all rook related logical devices --------------------------------- 0.34s 2026-03-19 02:23:32.326964 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-03-19 02:23:44.720633 | orchestrator | 2026-03-19 02:23:44 | INFO  | Task de22b3ed-ae2e-48d4-b357-366f63eb5d20 (facts) was prepared for execution. 2026-03-19 02:23:44.720856 | orchestrator | 2026-03-19 02:23:44 | INFO  | It takes a moment until task de22b3ed-ae2e-48d4-b357-366f63eb5d20 (facts) has been started and output is visible here. 
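The "Wipe partitions" play above boils down to a short sequence per disk: drop filesystem/partition signatures, zero the leading region, then refresh udev. A minimal sketch of that sequence, using a plain temporary file as a stand-in for `/dev/sdb`..`/dev/sdd` so it is safe to run (on real devices the play zeroes the first 32M and then reloads udev, which the file stand-in cannot demonstrate):

```shell
set -eu
disk=$(mktemp)                                            # stand-in for /dev/sdb
dd if=/dev/urandom of="$disk" bs=1M count=2 status=none   # simulate leftover data

# 1. "Wipe partitions with wipefs": remove fs/partition signatures.
wipefs --all "$disk" >/dev/null 2>&1 || true

# 2. "Overwrite first 32M with zeros": here only 1M, to keep the demo small.
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc status=none

# 3. "Reload udev rules" / "Request device events" (real devices only):
#      udevadm control --reload-rules && udevadm trigger

cmp -s -n 1048576 "$disk" /dev/zero && echo "leading region zeroed"
rm -f "$disk"
```

This mirrors the task order in the play recap; the exact flags the playbook passes to `wipefs` and `dd` are not visible in the log, so the ones above are assumptions.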
2026-03-19 02:23:57.469456 | orchestrator | 2026-03-19 02:23:57.469617 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-19 02:23:57.469638 | orchestrator | 2026-03-19 02:23:57.469648 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 02:23:57.469658 | orchestrator | Thursday 19 March 2026 02:23:48 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-03-19 02:23:57.469693 | orchestrator | ok: [testbed-manager] 2026-03-19 02:23:57.469704 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:23:57.469713 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:57.469722 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:57.469731 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:23:57.469740 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:23:57.469748 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:23:57.469825 | orchestrator | 2026-03-19 02:23:57.469858 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 02:23:57.469878 | orchestrator | Thursday 19 March 2026 02:23:50 +0000 (0:00:01.135) 0:00:01.410 ******** 2026-03-19 02:23:57.469893 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:23:57.469908 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:23:57.469921 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:23:57.469934 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:23:57.469948 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:23:57.469962 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:23:57.469976 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:23:57.469991 | orchestrator | 2026-03-19 02:23:57.470008 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 02:23:57.470081 | orchestrator | 2026-03-19 02:23:57.470093 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-19 02:23:57.470103 | orchestrator | Thursday 19 March 2026 02:23:51 +0000 (0:00:01.281) 0:00:02.691 ******** 2026-03-19 02:23:57.470114 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:23:57.470122 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:23:57.470131 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:23:57.470139 | orchestrator | ok: [testbed-manager] 2026-03-19 02:23:57.470148 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:23:57.470156 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:23:57.470165 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:23:57.470173 | orchestrator | 2026-03-19 02:23:57.470182 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 02:23:57.470191 | orchestrator | 2026-03-19 02:23:57.470200 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 02:23:57.470208 | orchestrator | Thursday 19 March 2026 02:23:56 +0000 (0:00:05.100) 0:00:07.792 ******** 2026-03-19 02:23:57.470217 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:23:57.470225 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:23:57.470234 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:23:57.470242 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:23:57.470251 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:23:57.470259 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:23:57.470268 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:23:57.470276 | orchestrator | 2026-03-19 02:23:57.470284 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:23:57.470294 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470354 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-19 02:23:57.470364 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470373 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470381 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470390 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470410 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:23:57.470426 | orchestrator | 2026-03-19 02:23:57.470441 | orchestrator | 2026-03-19 02:23:57.470456 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:23:57.470472 | orchestrator | Thursday 19 March 2026 02:23:57 +0000 (0:00:00.543) 0:00:08.335 ******** 2026-03-19 02:23:57.470488 | orchestrator | =============================================================================== 2026-03-19 02:23:57.470503 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s 2026-03-19 02:23:57.470517 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-03-19 02:23:57.470526 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-03-19 02:23:57.470534 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-03-19 02:23:59.887936 | orchestrator | 2026-03-19 02:23:59 | INFO  | Task 2b150659-e4dc-4684-940c-d655799afd11 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-03-19 02:23:59.888074 | orchestrator | 2026-03-19 02:23:59 | INFO  | It takes a moment until task 2b150659-e4dc-4684-940c-d655799afd11 (ceph-configure-lvm-volumes) has been started and output is visible here.
[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Thursday 19 March 2026 02:24:04 +0000 (0:00:00.336) 0:00:00.336 ********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Thursday 19 March 2026 02:24:04 +0000 (0:00:00.248) 0:00:00.584 ********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:04 +0000 (0:00:00.235) 0:00:00.819 ********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:05 +0000 (0:00:00.485) 0:00:01.305 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:05 +0000 (0:00:00.200) 0:00:01.505 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:05 +0000 (0:00:00.189) 0:00:01.695 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:05 +0000 (0:00:00.216) 0:00:01.911 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:06 +0000 (0:00:00.201) 0:00:02.112 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:06 +0000 (0:00:00.208) 0:00:02.320 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:06 +0000 (0:00:00.212) 0:00:02.533 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:06 +0000 (0:00:00.202) 0:00:02.736 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:06 +0000 (0:00:00.215) 0:00:02.952 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:07 +0000 (0:00:00.404) 0:00:03.356 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:08 +0000 (0:00:00.629) 0:00:03.985 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:08 +0000 (0:00:00.680) 0:00:04.665 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:09 +0000 (0:00:00.851) 0:00:05.517 ********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:09 +0000 (0:00:00.332) 0:00:05.850 ********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:10 +0000 (0:00:00.401) 0:00:06.251 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:10 +0000 (0:00:00.225) 0:00:06.477 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:10 +0000 (0:00:00.226) 0:00:06.703 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:10 +0000 (0:00:00.215) 0:00:06.919 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:11 +0000 (0:00:00.208) 0:00:07.128 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:11 +0000 (0:00:00.236) 0:00:07.364 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:11 +0000 (0:00:00.208) 0:00:07.573 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:11 +0000 (0:00:00.197) 0:00:07.771 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:11 +0000 (0:00:00.203) 0:00:07.974 ********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:13 +0000 (0:00:01.046) 0:00:09.021 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:13 +0000 (0:00:00.204) 0:00:09.226 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:13 +0000 (0:00:00.190) 0:00:09.416 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:13 +0000 (0:00:00.219) 0:00:09.636 ********
skipping: [testbed-node-3]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Thursday 19 March 2026 02:24:13 +0000 (0:00:00.204) 0:00:09.840 ********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.178) 0:00:10.019 ********
skipping: [testbed-node-3]

TASK [Generate DB VG names] ****************************************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.144) 0:00:10.163 ********
skipping: [testbed-node-3]

TASK [Generate shared DB/WAL VG names] *****************************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.136) 0:00:10.300 ********
skipping: [testbed-node-3]

TASK [Define lvm_volumes structures] *******************************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.139) 0:00:10.439 ********
ok: [testbed-node-3]

TASK [Generate lvm_volumes structure (block only)] *****************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.137) 0:00:10.577 ********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55f97389-0425-5b31-8593-f3b3ad53d7f9'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '432058d8-20d3-534b-84ac-2a35b6cfcd9e'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Thursday 19 March 2026 02:24:14 +0000 (0:00:00.165) 0:00:10.742 ********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55f97389-0425-5b31-8593-f3b3ad53d7f9'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '432058d8-20d3-534b-84ac-2a35b6cfcd9e'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.335) 0:00:11.078 ********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55f97389-0425-5b31-8593-f3b3ad53d7f9'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '432058d8-20d3-534b-84ac-2a35b6cfcd9e'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.168) 0:00:11.246 ********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55f97389-0425-5b31-8593-f3b3ad53d7f9'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '432058d8-20d3-534b-84ac-2a35b6cfcd9e'}})
skipping: [testbed-node-3]

TASK [Compile lvm_volumes] *****************************************************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.163) 0:00:11.410 ********
ok: [testbed-node-3]

TASK [Set OSD devices config data] *********************************************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.141) 0:00:11.551 ********
ok: [testbed-node-3]

TASK [Set DB devices config data] **********************************************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.157) 0:00:11.709 ********
skipping: [testbed-node-3]

TASK [Set WAL devices config data] *********************************************
Thursday 19 March 2026 02:24:15 +0000 (0:00:00.150) 0:00:11.859 ********
skipping: [testbed-node-3]

TASK [Set DB+WAL devices config data] ******************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.134) 0:00:11.993 ********
skipping: [testbed-node-3]

TASK [Print ceph_osd_devices] **************************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.139) 0:00:12.133 ********
ok: [testbed-node-3] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "55f97389-0425-5b31-8593-f3b3ad53d7f9"
        },
        "sdc": {
            "osd_lvm_uuid": "432058d8-20d3-534b-84ac-2a35b6cfcd9e"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.147) 0:00:12.280 ********
skipping: [testbed-node-3]

TASK [Print DB devices] ********************************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.147) 0:00:12.427 ********
skipping: [testbed-node-3]

TASK [Print shared DB/WAL devices] *********************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.146) 0:00:12.574 ********
skipping: [testbed-node-3]

TASK [Print configuration data] ************************************************
Thursday 19 March 2026 02:24:16 +0000 (0:00:00.152) 0:00:12.726 ********
changed: [testbed-node-3] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "55f97389-0425-5b31-8593-f3b3ad53d7f9"
            },
            "sdc": {
                "osd_lvm_uuid": "432058d8-20d3-534b-84ac-2a35b6cfcd9e"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9",
                "data_vg": "ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9"
            },
            {
                "data": "osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e",
                "data_vg": "ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Thursday 19 March 2026 02:24:17 +0000 (0:00:00.409) 0:00:13.135 ********
changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Thursday 19 March 2026 02:24:18 +0000 (0:00:01.768) 0:00:14.903 ********
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Thursday 19 March 2026 02:24:19 +0000 (0:00:00.269) 0:00:15.173 ********
ok: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:19 +0000 (0:00:00.253) 0:00:15.427 ********
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:19 +0000 (0:00:00.375) 0:00:15.802 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:20 +0000 (0:00:00.189) 0:00:15.991 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:20 +0000 (0:00:00.206) 0:00:16.197 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:20 +0000 (0:00:00.205) 0:00:16.403 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:21 +0000 (0:00:00.605) 0:00:17.009 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:21 +0000 (0:00:00.209) 0:00:17.218 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:21 +0000 (0:00:00.216) 0:00:17.435 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:21 +0000 (0:00:00.216) 0:00:17.651 ********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:21 +0000 (0:00:00.210) 0:00:17.862 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:22 +0000 (0:00:00.417) 0:00:18.280 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:22 +0000 (0:00:00.465) 0:00:18.745 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:23 +0000 (0:00:00.442) 0:00:19.188 ********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8)

TASK [Add known links to the list of available block devices] ******************
Thursday 19 March 2026 02:24:23 +0000 (0:00:00.438) 0:00:19.626 ********
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:23 +0000 (0:00:00.329) 0:00:19.956 ********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:24 +0000 (0:00:00.391) 0:00:20.348 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:24 +0000 (0:00:00.605) 0:00:20.954 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:25 +0000 (0:00:00.217) 0:00:21.172 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:25 +0000 (0:00:00.218) 0:00:21.390 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:25 +0000 (0:00:00.258) 0:00:21.649 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:25 +0000 (0:00:00.224) 0:00:21.874 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:26 +0000 (0:00:00.205) 0:00:22.079 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:26 +0000 (0:00:00.207) 0:00:22.286 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:26 +0000 (0:00:00.222) 0:00:22.509 ********
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:27 +0000 (0:00:00.880) 0:00:23.389 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:27 +0000 (0:00:00.206) 0:00:23.596 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:27 +0000 (0:00:00.206) 0:00:23.802 ********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 19 March 2026 02:24:28 +0000 (0:00:00.668) 0:00:24.471 ********
skipping: [testbed-node-4]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Thursday 19 March 2026 02:24:28 +0000 (0:00:00.199) 0:00:24.670 ********
ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names]
*************************************************** 2026-03-19 02:24:34.175841 | orchestrator | Thursday 19 March 2026 02:24:28 +0000 (0:00:00.175) 0:00:24.846 ******** 2026-03-19 02:24:34.175844 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.175848 | orchestrator | 2026-03-19 02:24:34.175852 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-19 02:24:34.175855 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.146) 0:00:24.992 ******** 2026-03-19 02:24:34.175859 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.175863 | orchestrator | 2026-03-19 02:24:34.175867 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-19 02:24:34.175870 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.155) 0:00:25.147 ******** 2026-03-19 02:24:34.175874 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.175878 | orchestrator | 2026-03-19 02:24:34.175882 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-19 02:24:34.175885 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.153) 0:00:25.301 ******** 2026-03-19 02:24:34.175889 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:24:34.175894 | orchestrator | 2026-03-19 02:24:34.175897 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-19 02:24:34.175901 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.145) 0:00:25.446 ******** 2026-03-19 02:24:34.175923 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b653c337-740c-52f4-bc46-3e8e37039a81'}}) 2026-03-19 02:24:34.175927 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}}) 2026-03-19 02:24:34.175932 | orchestrator | 2026-03-19 02:24:34.175935 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-19 02:24:34.175939 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.169) 0:00:25.615 ******** 2026-03-19 02:24:34.175943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b653c337-740c-52f4-bc46-3e8e37039a81'}})  2026-03-19 02:24:34.175949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}})  2026-03-19 02:24:34.175953 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.175957 | orchestrator | 2026-03-19 02:24:34.175961 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-19 02:24:34.175964 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.164) 0:00:25.779 ******** 2026-03-19 02:24:34.175968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b653c337-740c-52f4-bc46-3e8e37039a81'}})  2026-03-19 02:24:34.175972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}})  2026-03-19 02:24:34.175975 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.175979 | orchestrator | 2026-03-19 02:24:34.175983 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-19 02:24:34.175987 | orchestrator | Thursday 19 March 2026 02:24:29 +0000 (0:00:00.169) 0:00:25.948 ******** 2026-03-19 02:24:34.175990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b653c337-740c-52f4-bc46-3e8e37039a81'}})  2026-03-19 02:24:34.175994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}})  2026-03-19 02:24:34.175998 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176001 | 
orchestrator | 2026-03-19 02:24:34.176005 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-19 02:24:34.176009 | orchestrator | Thursday 19 March 2026 02:24:30 +0000 (0:00:00.183) 0:00:26.132 ******** 2026-03-19 02:24:34.176013 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:24:34.176016 | orchestrator | 2026-03-19 02:24:34.176020 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-19 02:24:34.176024 | orchestrator | Thursday 19 March 2026 02:24:30 +0000 (0:00:00.140) 0:00:26.272 ******** 2026-03-19 02:24:34.176027 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:24:34.176031 | orchestrator | 2026-03-19 02:24:34.176035 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-19 02:24:34.176038 | orchestrator | Thursday 19 March 2026 02:24:30 +0000 (0:00:00.148) 0:00:26.421 ******** 2026-03-19 02:24:34.176052 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176056 | orchestrator | 2026-03-19 02:24:34.176060 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-19 02:24:34.176063 | orchestrator | Thursday 19 March 2026 02:24:30 +0000 (0:00:00.365) 0:00:26.787 ******** 2026-03-19 02:24:34.176067 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176071 | orchestrator | 2026-03-19 02:24:34.176074 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-19 02:24:34.176078 | orchestrator | Thursday 19 March 2026 02:24:30 +0000 (0:00:00.140) 0:00:26.928 ******** 2026-03-19 02:24:34.176085 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176089 | orchestrator | 2026-03-19 02:24:34.176093 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-19 02:24:34.176097 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 
(0:00:00.141) 0:00:27.069 ******** 2026-03-19 02:24:34.176108 | orchestrator | ok: [testbed-node-4] => { 2026-03-19 02:24:34.176112 | orchestrator |  "ceph_osd_devices": { 2026-03-19 02:24:34.176116 | orchestrator |  "sdb": { 2026-03-19 02:24:34.176120 | orchestrator |  "osd_lvm_uuid": "b653c337-740c-52f4-bc46-3e8e37039a81" 2026-03-19 02:24:34.176124 | orchestrator |  }, 2026-03-19 02:24:34.176128 | orchestrator |  "sdc": { 2026-03-19 02:24:34.176132 | orchestrator |  "osd_lvm_uuid": "a2eacdaa-bff5-5a13-b9a9-6af0c62255c8" 2026-03-19 02:24:34.176135 | orchestrator |  } 2026-03-19 02:24:34.176139 | orchestrator |  } 2026-03-19 02:24:34.176143 | orchestrator | } 2026-03-19 02:24:34.176147 | orchestrator | 2026-03-19 02:24:34.176151 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-19 02:24:34.176155 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 (0:00:00.162) 0:00:27.232 ******** 2026-03-19 02:24:34.176158 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176162 | orchestrator | 2026-03-19 02:24:34.176166 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-19 02:24:34.176169 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 (0:00:00.142) 0:00:27.374 ******** 2026-03-19 02:24:34.176173 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176177 | orchestrator | 2026-03-19 02:24:34.176181 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-19 02:24:34.176185 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 (0:00:00.159) 0:00:27.533 ******** 2026-03-19 02:24:34.176190 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:24:34.176194 | orchestrator | 2026-03-19 02:24:34.176198 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-19 02:24:34.176202 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 
(0:00:00.142) 0:00:27.676 ******** 2026-03-19 02:24:34.176207 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 02:24:34.176211 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-19 02:24:34.176215 | orchestrator |  "ceph_osd_devices": { 2026-03-19 02:24:34.176220 | orchestrator |  "sdb": { 2026-03-19 02:24:34.176224 | orchestrator |  "osd_lvm_uuid": "b653c337-740c-52f4-bc46-3e8e37039a81" 2026-03-19 02:24:34.176228 | orchestrator |  }, 2026-03-19 02:24:34.176232 | orchestrator |  "sdc": { 2026-03-19 02:24:34.176237 | orchestrator |  "osd_lvm_uuid": "a2eacdaa-bff5-5a13-b9a9-6af0c62255c8" 2026-03-19 02:24:34.176241 | orchestrator |  } 2026-03-19 02:24:34.176245 | orchestrator |  }, 2026-03-19 02:24:34.176250 | orchestrator |  "lvm_volumes": [ 2026-03-19 02:24:34.176254 | orchestrator |  { 2026-03-19 02:24:34.176259 | orchestrator |  "data": "osd-block-b653c337-740c-52f4-bc46-3e8e37039a81", 2026-03-19 02:24:34.176263 | orchestrator |  "data_vg": "ceph-b653c337-740c-52f4-bc46-3e8e37039a81" 2026-03-19 02:24:34.176270 | orchestrator |  }, 2026-03-19 02:24:34.176276 | orchestrator |  { 2026-03-19 02:24:34.176282 | orchestrator |  "data": "osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8", 2026-03-19 02:24:34.176288 | orchestrator |  "data_vg": "ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8" 2026-03-19 02:24:34.176294 | orchestrator |  } 2026-03-19 02:24:34.176299 | orchestrator |  ] 2026-03-19 02:24:34.176305 | orchestrator |  } 2026-03-19 02:24:34.176311 | orchestrator | } 2026-03-19 02:24:34.176317 | orchestrator | 2026-03-19 02:24:34.176323 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-19 02:24:34.176328 | orchestrator | Thursday 19 March 2026 02:24:31 +0000 (0:00:00.212) 0:00:27.889 ******** 2026-03-19 02:24:34.176334 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-19 02:24:34.176339 | orchestrator | 2026-03-19 02:24:34.176345 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-19 02:24:34.176351 | orchestrator | 2026-03-19 02:24:34.176357 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 02:24:34.176363 | orchestrator | Thursday 19 March 2026 02:24:33 +0000 (0:00:01.369) 0:00:29.259 ******** 2026-03-19 02:24:34.176375 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-19 02:24:34.176381 | orchestrator | 2026-03-19 02:24:34.176388 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 02:24:34.176394 | orchestrator | Thursday 19 March 2026 02:24:33 +0000 (0:00:00.257) 0:00:29.517 ******** 2026-03-19 02:24:34.176399 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:24:34.176405 | orchestrator | 2026-03-19 02:24:34.176411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:34.176417 | orchestrator | Thursday 19 March 2026 02:24:33 +0000 (0:00:00.247) 0:00:29.764 ******** 2026-03-19 02:24:34.176423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-19 02:24:34.176429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-19 02:24:34.176437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-19 02:24:34.176441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-19 02:24:34.176444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-19 02:24:34.176453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-19 02:24:42.545783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-19 02:24:42.545872 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-19 02:24:42.545878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-19 02:24:42.545883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-19 02:24:42.545902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-19 02:24:42.545906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-19 02:24:42.545910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-19 02:24:42.545914 | orchestrator | 2026-03-19 02:24:42.545919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.545924 | orchestrator | Thursday 19 March 2026 02:24:34 +0000 (0:00:00.387) 0:00:30.152 ******** 2026-03-19 02:24:42.545928 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.545933 | orchestrator | 2026-03-19 02:24:42.545937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.545941 | orchestrator | Thursday 19 March 2026 02:24:34 +0000 (0:00:00.212) 0:00:30.364 ******** 2026-03-19 02:24:42.545945 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.545949 | orchestrator | 2026-03-19 02:24:42.545952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.545956 | orchestrator | Thursday 19 March 2026 02:24:34 +0000 (0:00:00.213) 0:00:30.578 ******** 2026-03-19 02:24:42.545960 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.545964 | orchestrator | 2026-03-19 02:24:42.545968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.545971 | 
orchestrator | Thursday 19 March 2026 02:24:34 +0000 (0:00:00.225) 0:00:30.803 ******** 2026-03-19 02:24:42.545975 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.545979 | orchestrator | 2026-03-19 02:24:42.545983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.545987 | orchestrator | Thursday 19 March 2026 02:24:35 +0000 (0:00:00.221) 0:00:31.025 ******** 2026-03-19 02:24:42.545991 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.545995 | orchestrator | 2026-03-19 02:24:42.545998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546002 | orchestrator | Thursday 19 March 2026 02:24:35 +0000 (0:00:00.212) 0:00:31.237 ******** 2026-03-19 02:24:42.546060 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.546065 | orchestrator | 2026-03-19 02:24:42.546069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546073 | orchestrator | Thursday 19 March 2026 02:24:35 +0000 (0:00:00.207) 0:00:31.445 ******** 2026-03-19 02:24:42.546076 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.546080 | orchestrator | 2026-03-19 02:24:42.546084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546088 | orchestrator | Thursday 19 March 2026 02:24:36 +0000 (0:00:00.662) 0:00:32.108 ******** 2026-03-19 02:24:42.546091 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:42.546095 | orchestrator | 2026-03-19 02:24:42.546099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546103 | orchestrator | Thursday 19 March 2026 02:24:36 +0000 (0:00:00.211) 0:00:32.319 ******** 2026-03-19 02:24:42.546107 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77) 2026-03-19 02:24:42.546112 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77) 2026-03-19 02:24:42.546116 | orchestrator | 2026-03-19 02:24:42.546119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546123 | orchestrator | Thursday 19 March 2026 02:24:36 +0000 (0:00:00.438) 0:00:32.757 ******** 2026-03-19 02:24:42.546127 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97) 2026-03-19 02:24:42.546131 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97) 2026-03-19 02:24:42.546134 | orchestrator | 2026-03-19 02:24:42.546138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546142 | orchestrator | Thursday 19 March 2026 02:24:37 +0000 (0:00:00.424) 0:00:33.181 ******** 2026-03-19 02:24:42.546146 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff) 2026-03-19 02:24:42.546150 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff) 2026-03-19 02:24:42.546154 | orchestrator | 2026-03-19 02:24:42.546157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:24:42.546161 | orchestrator | Thursday 19 March 2026 02:24:37 +0000 (0:00:00.451) 0:00:33.633 ******** 2026-03-19 02:24:42.546165 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906) 2026-03-19 02:24:42.546169 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906) 2026-03-19 02:24:42.546173 | orchestrator | 2026-03-19 02:24:42.546177 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-19 02:24:42.546180 | orchestrator | Thursday 19 March 2026 02:24:38 +0000 (0:00:00.454) 0:00:34.088 ******** 2026-03-19 02:24:42.546184 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 02:24:42.546188 | orchestrator | 2026-03-19 02:24:42.546192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:24:42.546207 | orchestrator | Thursday 19 March 2026 02:24:38 +0000 (0:00:00.348) 0:00:34.436 ******** 2026-03-19 02:24:42.546211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-19 02:24:42.546215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-19 02:24:42.546219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-19 02:24:42.546226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-19 02:24:42.546230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-19 02:24:42.546233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-19 02:24:42.546241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-19 02:24:42.546245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-19 02:24:42.546248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-19 02:24:42.546252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-19 02:24:42.546256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-19 02:24:42.546259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-19 02:24:42.546263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-19 02:24:42.546267 | orchestrator |
2026-03-19 02:24:42.546271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546274 | orchestrator | Thursday 19 March 2026  02:24:38 +0000 (0:00:00.391)       0:00:34.827 ********
2026-03-19 02:24:42.546278 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546282 | orchestrator |
2026-03-19 02:24:42.546286 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546290 | orchestrator | Thursday 19 March 2026  02:24:39 +0000 (0:00:00.224)       0:00:35.052 ********
2026-03-19 02:24:42.546295 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546299 | orchestrator |
2026-03-19 02:24:42.546303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546308 | orchestrator | Thursday 19 March 2026  02:24:39 +0000 (0:00:00.194)       0:00:35.246 ********
2026-03-19 02:24:42.546312 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546316 | orchestrator |
2026-03-19 02:24:42.546321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546325 | orchestrator | Thursday 19 March 2026  02:24:39 +0000 (0:00:00.645)       0:00:35.892 ********
2026-03-19 02:24:42.546329 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546334 | orchestrator |
2026-03-19 02:24:42.546338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546342 | orchestrator | Thursday 19 March 2026  02:24:40 +0000 (0:00:00.203)       0:00:36.095 ********
2026-03-19 02:24:42.546346 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546351 | orchestrator |
2026-03-19 02:24:42.546355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546359 | orchestrator | Thursday 19 March 2026  02:24:40 +0000 (0:00:00.204)       0:00:36.300 ********
2026-03-19 02:24:42.546363 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546367 | orchestrator |
2026-03-19 02:24:42.546372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546376 | orchestrator | Thursday 19 March 2026  02:24:40 +0000 (0:00:00.237)       0:00:36.537 ********
2026-03-19 02:24:42.546381 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546385 | orchestrator |
2026-03-19 02:24:42.546389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546404 | orchestrator | Thursday 19 March 2026  02:24:40 +0000 (0:00:00.225)       0:00:36.762 ********
2026-03-19 02:24:42.546408 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546418 | orchestrator |
2026-03-19 02:24:42.546423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546427 | orchestrator | Thursday 19 March 2026  02:24:40 +0000 (0:00:00.218)       0:00:36.980 ********
2026-03-19 02:24:42.546431 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-19 02:24:42.546436 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-19 02:24:42.546440 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-19 02:24:42.546444 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-19 02:24:42.546449 | orchestrator |
2026-03-19 02:24:42.546456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546461 | orchestrator | Thursday 19 March 2026  02:24:41 +0000 (0:00:00.694)       0:00:37.674 ********
2026-03-19 02:24:42.546465 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546469 | orchestrator |
2026-03-19 02:24:42.546474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546478 | orchestrator | Thursday 19 March 2026  02:24:41 +0000 (0:00:00.214)       0:00:37.888 ********
2026-03-19 02:24:42.546482 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546487 | orchestrator |
2026-03-19 02:24:42.546491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546495 | orchestrator | Thursday 19 March 2026  02:24:42 +0000 (0:00:00.210)       0:00:38.099 ********
2026-03-19 02:24:42.546499 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546504 | orchestrator |
2026-03-19 02:24:42.546508 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:24:42.546512 | orchestrator | Thursday 19 March 2026  02:24:42 +0000 (0:00:00.213)       0:00:38.313 ********
2026-03-19 02:24:42.546517 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:42.546521 | orchestrator |
2026-03-19 02:24:42.546528 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-19 02:24:46.973443 | orchestrator | Thursday 19 March 2026  02:24:42 +0000 (0:00:00.209)       0:00:38.523 ********
2026-03-19 02:24:46.973519 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-19 02:24:46.973524 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-19 02:24:46.973529 | orchestrator |
2026-03-19 02:24:46.973534 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-19 02:24:46.973553 | orchestrator | Thursday 19 March 2026  02:24:42 +0000 (0:00:00.407)       0:00:38.930 ********
2026-03-19 02:24:46.973557 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973562 | orchestrator |
2026-03-19 02:24:46.973566 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-19 02:24:46.973570 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.146)       0:00:39.077 ********
2026-03-19 02:24:46.973583 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973588 | orchestrator |
2026-03-19 02:24:46.973591 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-19 02:24:46.973595 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.188)       0:00:39.265 ********
2026-03-19 02:24:46.973605 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973609 | orchestrator |
2026-03-19 02:24:46.973613 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-19 02:24:46.973617 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.148)       0:00:39.413 ********
2026-03-19 02:24:46.973620 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:24:46.973625 | orchestrator |
2026-03-19 02:24:46.973629 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-19 02:24:46.973633 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.136)       0:00:39.550 ********
2026-03-19 02:24:46.973637 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ab7a01d4-aa20-5ffe-8eee-b634151ce758'}})
2026-03-19 02:24:46.973642 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb497169-2d92-5217-a604-0fdb844d53ba'}})
2026-03-19 02:24:46.973646 | orchestrator |
2026-03-19 02:24:46.973653 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-19 02:24:46.973659 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.173)       0:00:39.724 ********
2026-03-19 02:24:46.973666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ab7a01d4-aa20-5ffe-8eee-b634151ce758'}})
2026-03-19 02:24:46.973674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb497169-2d92-5217-a604-0fdb844d53ba'}})
2026-03-19 02:24:46.973680 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973705 | orchestrator |
2026-03-19 02:24:46.973711 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-19 02:24:46.973733 | orchestrator | Thursday 19 March 2026  02:24:43 +0000 (0:00:00.156)       0:00:39.880 ********
2026-03-19 02:24:46.973740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ab7a01d4-aa20-5ffe-8eee-b634151ce758'}})
2026-03-19 02:24:46.973746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb497169-2d92-5217-a604-0fdb844d53ba'}})
2026-03-19 02:24:46.973752 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973758 | orchestrator |
2026-03-19 02:24:46.973764 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-19 02:24:46.973770 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.154)       0:00:40.034 ********
2026-03-19 02:24:46.973776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ab7a01d4-aa20-5ffe-8eee-b634151ce758'}})
2026-03-19 02:24:46.973783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb497169-2d92-5217-a604-0fdb844d53ba'}})
2026-03-19 02:24:46.973789 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973795 | orchestrator |
2026-03-19 02:24:46.973801 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-19 02:24:46.973807 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.155)       0:00:40.189 ********
2026-03-19 02:24:46.973813 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:24:46.973818 | orchestrator |
2026-03-19 02:24:46.973824 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-19 02:24:46.973829 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.148)       0:00:40.338 ********
2026-03-19 02:24:46.973835 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:24:46.973840 | orchestrator |
2026-03-19 02:24:46.973845 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-19 02:24:46.973851 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.148)       0:00:40.487 ********
2026-03-19 02:24:46.973856 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973862 | orchestrator |
2026-03-19 02:24:46.973868 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-19 02:24:46.973874 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.344)       0:00:40.831 ********
2026-03-19 02:24:46.973879 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973885 | orchestrator |
2026-03-19 02:24:46.973891 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-19 02:24:46.973897 | orchestrator | Thursday 19 March 2026  02:24:44 +0000 (0:00:00.146)       0:00:40.978 ********
2026-03-19 02:24:46.973903 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:24:46.973909 | orchestrator |
2026-03-19 02:24:46.973914 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-19 02:24:46.973920 | orchestrator | Thursday 19 March 2026  02:24:45 +0000 (0:00:00.140)       0:00:41.118 ********
2026-03-19 02:24:46.973926 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:24:46.973931 | orchestrator |     "ceph_osd_devices": {
2026-03-19 02:24:46.973937 | orchestrator |         "sdb": {
2026-03-19 02:24:46.973958 | orchestrator |  "osd_lvm_uuid": "ab7a01d4-aa20-5ffe-8eee-b634151ce758" 2026-03-19 02:24:46.973965 | orchestrator |  }, 2026-03-19 02:24:46.973971 | orchestrator |  "sdc": { 2026-03-19 02:24:46.973977 | orchestrator |  "osd_lvm_uuid": "eb497169-2d92-5217-a604-0fdb844d53ba" 2026-03-19 02:24:46.973983 | orchestrator |  } 2026-03-19 02:24:46.973989 | orchestrator |  } 2026-03-19 02:24:46.973995 | orchestrator | } 2026-03-19 02:24:46.974001 | orchestrator | 2026-03-19 02:24:46.974008 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-19 02:24:46.974065 | orchestrator | Thursday 19 March 2026 02:24:45 +0000 (0:00:00.153) 0:00:41.271 ******** 2026-03-19 02:24:46.974074 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:46.974090 | orchestrator | 2026-03-19 02:24:46.974097 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-19 02:24:46.974104 | orchestrator | Thursday 19 March 2026 02:24:45 +0000 (0:00:00.143) 0:00:41.415 ******** 2026-03-19 02:24:46.974110 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:46.974117 | orchestrator | 2026-03-19 02:24:46.974123 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-19 02:24:46.974130 | orchestrator | Thursday 19 March 2026 02:24:45 +0000 (0:00:00.139) 0:00:41.555 ******** 2026-03-19 02:24:46.974136 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:24:46.974142 | orchestrator | 2026-03-19 02:24:46.974149 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-19 02:24:46.974155 | orchestrator | Thursday 19 March 2026 02:24:45 +0000 (0:00:00.140) 0:00:41.695 ******** 2026-03-19 02:24:46.974162 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 02:24:46.974168 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-19 02:24:46.974175 | orchestrator | 
 "ceph_osd_devices": { 2026-03-19 02:24:46.974181 | orchestrator |  "sdb": { 2026-03-19 02:24:46.974187 | orchestrator |  "osd_lvm_uuid": "ab7a01d4-aa20-5ffe-8eee-b634151ce758" 2026-03-19 02:24:46.974194 | orchestrator |  }, 2026-03-19 02:24:46.974200 | orchestrator |  "sdc": { 2026-03-19 02:24:46.974207 | orchestrator |  "osd_lvm_uuid": "eb497169-2d92-5217-a604-0fdb844d53ba" 2026-03-19 02:24:46.974213 | orchestrator |  } 2026-03-19 02:24:46.974220 | orchestrator |  }, 2026-03-19 02:24:46.974226 | orchestrator |  "lvm_volumes": [ 2026-03-19 02:24:46.974232 | orchestrator |  { 2026-03-19 02:24:46.974239 | orchestrator |  "data": "osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758", 2026-03-19 02:24:46.974245 | orchestrator |  "data_vg": "ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758" 2026-03-19 02:24:46.974251 | orchestrator |  }, 2026-03-19 02:24:46.974257 | orchestrator |  { 2026-03-19 02:24:46.974263 | orchestrator |  "data": "osd-block-eb497169-2d92-5217-a604-0fdb844d53ba", 2026-03-19 02:24:46.974270 | orchestrator |  "data_vg": "ceph-eb497169-2d92-5217-a604-0fdb844d53ba" 2026-03-19 02:24:46.974276 | orchestrator |  } 2026-03-19 02:24:46.974282 | orchestrator |  ] 2026-03-19 02:24:46.974289 | orchestrator |  } 2026-03-19 02:24:46.974296 | orchestrator | } 2026-03-19 02:24:46.974303 | orchestrator | 2026-03-19 02:24:46.974309 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-19 02:24:46.974315 | orchestrator | Thursday 19 March 2026 02:24:45 +0000 (0:00:00.219) 0:00:41.915 ******** 2026-03-19 02:24:46.974322 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-19 02:24:46.974329 | orchestrator | 2026-03-19 02:24:46.974335 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:24:46.974341 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 02:24:46.974348 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 02:24:46.974354 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-19 02:24:46.974360 | orchestrator | 2026-03-19 02:24:46.974366 | orchestrator | 2026-03-19 02:24:46.974373 | orchestrator | 2026-03-19 02:24:46.974378 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:24:46.974384 | orchestrator | Thursday 19 March 2026 02:24:46 +0000 (0:00:01.021) 0:00:42.937 ******** 2026-03-19 02:24:46.974389 | orchestrator | =============================================================================== 2026-03-19 02:24:46.974395 | orchestrator | Write configuration file ------------------------------------------------ 4.16s 2026-03-19 02:24:46.974401 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s 2026-03-19 02:24:46.974414 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2026-03-19 02:24:46.974421 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2026-03-19 02:24:46.974428 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-03-19 02:24:46.974434 | orchestrator | Set DB devices config data ---------------------------------------------- 0.86s 2026-03-19 02:24:46.974441 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-03-19 02:24:46.974447 | orchestrator | Print configuration data ------------------------------------------------ 0.84s 2026-03-19 02:24:46.974454 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-03-19 02:24:46.974460 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.76s 2026-03-19 
02:24:46.974467 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-03-19 02:24:46.974474 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-03-19 02:24:46.974480 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-19 02:24:46.974494 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-03-19 02:24:47.400574 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-19 02:24:47.400662 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.66s 2026-03-19 02:24:47.400670 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-03-19 02:24:47.400694 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-03-19 02:24:47.400700 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-03-19 02:24:47.400706 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-03-19 02:25:10.023944 | orchestrator | 2026-03-19 02:25:10 | INFO  | Task 52bef057-2dc3-45f5-9f65-f3c96b15ad71 (sync inventory) is running in background. Output coming soon. 
2026-03-19 02:25:38.232742 | orchestrator | 2026-03-19 02:25:11 | INFO  | Starting group_vars file reorganization
2026-03-19 02:25:38.232862 | orchestrator | 2026-03-19 02:25:11 | INFO  | Moved 0 file(s) to their respective directories
2026-03-19 02:25:38.232880 | orchestrator | 2026-03-19 02:25:11 | INFO  | Group_vars file reorganization completed
2026-03-19 02:25:38.232893 | orchestrator | 2026-03-19 02:25:14 | INFO  | Starting variable preparation from inventory
2026-03-19 02:25:38.232903 | orchestrator | 2026-03-19 02:25:17 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-19 02:25:38.232914 | orchestrator | 2026-03-19 02:25:17 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-19 02:25:38.232924 | orchestrator | 2026-03-19 02:25:17 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-19 02:25:38.232934 | orchestrator | 2026-03-19 02:25:17 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-19 02:25:38.232945 | orchestrator | 2026-03-19 02:25:17 | INFO  | Variable preparation completed
2026-03-19 02:25:38.232956 | orchestrator | 2026-03-19 02:25:19 | INFO  | Starting inventory overwrite handling
2026-03-19 02:25:38.232967 | orchestrator | 2026-03-19 02:25:19 | INFO  | Handling group overwrites in 99-overwrite
2026-03-19 02:25:38.232978 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removing group frr:children from 60-generic
2026-03-19 02:25:38.232990 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-19 02:25:38.233001 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-19 02:25:38.233046 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-19 02:25:38.233057 | orchestrator | 2026-03-19 02:25:19 | INFO  | Handling group overwrites in 20-roles
2026-03-19 02:25:38.233068 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-19 02:25:38.233079 | orchestrator | 2026-03-19 02:25:19 | INFO  | Removed 5 group(s) in total
2026-03-19 02:25:38.233090 | orchestrator | 2026-03-19 02:25:19 | INFO  | Inventory overwrite handling completed
2026-03-19 02:25:38.233101 | orchestrator | 2026-03-19 02:25:20 | INFO  | Starting merge of inventory files
2026-03-19 02:25:38.233112 | orchestrator | 2026-03-19 02:25:20 | INFO  | Inventory files merged successfully
2026-03-19 02:25:38.233123 | orchestrator | 2026-03-19 02:25:25 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-19 02:25:38.233134 | orchestrator | 2026-03-19 02:25:36 | INFO  | Successfully wrote ClusterShell configuration
2026-03-19 02:25:38.233146 | orchestrator | [master 1a5f135] 2026-03-19-02-25
2026-03-19 02:25:38.233158 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-19 02:25:40.496194 | orchestrator | 2026-03-19 02:25:40 | INFO  | Task 3bb23e72-8c05-413c-918e-2a2f29d4dedc (ceph-create-lvm-devices) was prepared for execution.
2026-03-19 02:25:40.496281 | orchestrator | 2026-03-19 02:25:40 | INFO  | It takes a moment until task 3bb23e72-8c05-413c-918e-2a2f29d4dedc (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-19 02:25:52.239639 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-19 02:25:52.239751 | orchestrator | 2.16.14
2026-03-19 02:25:52.239761 | orchestrator |
2026-03-19 02:25:52.239767 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-19 02:25:52.239773 | orchestrator |
2026-03-19 02:25:52.239778 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-19 02:25:52.239783 | orchestrator | Thursday 19 March 2026 02:25:44 +0000 (0:00:00.306) 0:00:00.306 ********
2026-03-19 02:25:52.239788 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 02:25:52.239793 | orchestrator |
2026-03-19 02:25:52.239798 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-19 02:25:52.239802 | orchestrator | Thursday 19 March 2026 02:25:45 +0000 (0:00:00.242) 0:00:00.548 ********
2026-03-19 02:25:52.239807 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:25:52.239812 | orchestrator |
2026-03-19 02:25:52.239817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.239822 | orchestrator | Thursday 19 March 2026 02:25:45 +0000 (0:00:00.231) 0:00:00.780 ********
2026-03-19 02:25:52.239826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-19 02:25:52.239831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-19 02:25:52.239849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-19 02:25:52.239854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-19 02:25:52.239859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-19 02:25:52.239863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-19 02:25:52.239868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-19 02:25:52.239873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-19 02:25:52.239878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-19 02:25:52.239882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-19 02:25:52.239905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-19 02:25:52.239910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-19 02:25:52.239914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-19 02:25:52.239919 | orchestrator |
2026-03-19 02:25:52.239923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.239928 | orchestrator | Thursday 19 March 2026 02:25:45 +0000 (0:00:00.529) 0:00:01.309 ********
2026-03-19 02:25:52.239932 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.239937 | orchestrator |
2026-03-19 02:25:52.239942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.239946 | orchestrator | Thursday 19 March 2026 02:25:46 +0000 (0:00:00.202) 0:00:01.512 ********
2026-03-19 02:25:52.239951 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.239955 | orchestrator |
2026-03-19 02:25:52.239960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.239964 | orchestrator | Thursday 19 March 2026 02:25:46 +0000 (0:00:00.219) 0:00:01.731 ********
2026-03-19 02:25:52.239969 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.239973 | orchestrator |
2026-03-19 02:25:52.239978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.239982 | orchestrator | Thursday 19 March 2026 02:25:46 +0000 (0:00:00.206) 0:00:01.938 ********
2026-03-19 02:25:52.239987 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.239992 | orchestrator |
2026-03-19 02:25:52.239996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240001 | orchestrator | Thursday 19 March 2026 02:25:46 +0000 (0:00:00.209) 0:00:02.148 ********
2026-03-19 02:25:52.240005 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240010 | orchestrator |
2026-03-19 02:25:52.240014 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240020 | orchestrator | Thursday 19 March 2026 02:25:46 +0000 (0:00:00.213) 0:00:02.361 ********
2026-03-19 02:25:52.240024 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240029 | orchestrator |
2026-03-19 02:25:52.240034 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240038 | orchestrator | Thursday 19 March 2026 02:25:47 +0000 (0:00:00.180) 0:00:02.542 ********
2026-03-19 02:25:52.240043 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240047 | orchestrator |
2026-03-19 02:25:52.240052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240056 | orchestrator | Thursday 19 March 2026 02:25:47 +0000 (0:00:00.205) 0:00:02.748 ********
2026-03-19 02:25:52.240061 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240066 | orchestrator |
2026-03-19 02:25:52.240070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240075 | orchestrator | Thursday 19 March 2026 02:25:47 +0000 (0:00:00.226) 0:00:02.974 ********
2026-03-19 02:25:52.240079 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2)
2026-03-19 02:25:52.240085 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2)
2026-03-19 02:25:52.240090 | orchestrator |
2026-03-19 02:25:52.240095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240110 | orchestrator | Thursday 19 March 2026 02:25:47 +0000 (0:00:00.417) 0:00:03.392 ********
2026-03-19 02:25:52.240115 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d)
2026-03-19 02:25:52.240119 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d)
2026-03-19 02:25:52.240124 | orchestrator |
2026-03-19 02:25:52.240129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240137 | orchestrator | Thursday 19 March 2026 02:25:48 +0000 (0:00:00.620) 0:00:04.013 ********
2026-03-19 02:25:52.240142 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1)
2026-03-19 02:25:52.240147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1)
2026-03-19 02:25:52.240151 | orchestrator |
2026-03-19 02:25:52.240156 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240160 | orchestrator | Thursday 19 March 2026 02:25:49 +0000 (0:00:00.622) 0:00:04.635 ********
2026-03-19 02:25:52.240165 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422)
2026-03-19 02:25:52.240169 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422)
2026-03-19 02:25:52.240174 | orchestrator |
2026-03-19 02:25:52.240182 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:25:52.240187 | orchestrator | Thursday 19 March 2026 02:25:49 +0000 (0:00:00.818) 0:00:05.454 ********
2026-03-19 02:25:52.240192 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-19 02:25:52.240196 | orchestrator |
2026-03-19 02:25:52.240202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240207 | orchestrator | Thursday 19 March 2026 02:25:50 +0000 (0:00:00.348) 0:00:05.802 ********
2026-03-19 02:25:52.240212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-19 02:25:52.240217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-19 02:25:52.240223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-19 02:25:52.240228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-19 02:25:52.240233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-19 02:25:52.240238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-19 02:25:52.240244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-19 02:25:52.240249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-19 02:25:52.240254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-19 02:25:52.240259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-19 02:25:52.240264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-19 02:25:52.240269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-19 02:25:52.240274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-19 02:25:52.240279 | orchestrator |
2026-03-19 02:25:52.240285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240290 | orchestrator | Thursday 19 March 2026 02:25:50 +0000 (0:00:00.423) 0:00:06.226 ********
2026-03-19 02:25:52.240295 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240300 | orchestrator |
2026-03-19 02:25:52.240305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240311 | orchestrator | Thursday 19 March 2026 02:25:50 +0000 (0:00:00.205) 0:00:06.431 ********
2026-03-19 02:25:52.240316 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240321 | orchestrator |
2026-03-19 02:25:52.240326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240332 | orchestrator | Thursday 19 March 2026 02:25:51 +0000 (0:00:00.229) 0:00:06.661 ********
2026-03-19 02:25:52.240337 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240346 | orchestrator |
2026-03-19 02:25:52.240351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240356 | orchestrator | Thursday 19 March 2026 02:25:51 +0000 (0:00:00.195) 0:00:06.857 ********
2026-03-19 02:25:52.240362 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240367 | orchestrator |
2026-03-19 02:25:52.240372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240377 | orchestrator | Thursday 19 March 2026 02:25:51 +0000 (0:00:00.204) 0:00:07.061 ********
2026-03-19 02:25:52.240383 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240388 | orchestrator |
2026-03-19 02:25:52.240393 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240398 | orchestrator | Thursday 19 March 2026 02:25:51 +0000 (0:00:00.217) 0:00:07.279 ********
2026-03-19 02:25:52.240403 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240408 | orchestrator |
2026-03-19 02:25:52.240413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:25:52.240419 | orchestrator | Thursday 19 March 2026 02:25:52 +0000 (0:00:00.246) 0:00:07.525 ********
2026-03-19 02:25:52.240424 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:25:52.240429 | orchestrator |
2026-03-19 02:25:52.240436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508499 | orchestrator | Thursday 19 March 2026 02:25:52 +0000 (0:00:00.202) 0:00:07.728 ********
2026-03-19 02:26:00.508619 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508632 | orchestrator |
2026-03-19 02:26:00.508639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508646 | orchestrator | Thursday 19 March 2026 02:25:52 +0000 (0:00:00.616) 0:00:08.344 ********
2026-03-19 02:26:00.508653 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-19 02:26:00.508659 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-19 02:26:00.508666 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-19 02:26:00.508691 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-19 02:26:00.508697 | orchestrator |
2026-03-19 02:26:00.508703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508709 | orchestrator | Thursday 19 March 2026 02:25:53 +0000 (0:00:00.719) 0:00:09.063 ********
2026-03-19 02:26:00.508715 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508721 | orchestrator |
2026-03-19 02:26:00.508727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508734 | orchestrator | Thursday 19 March 2026 02:25:53 +0000 (0:00:00.225) 0:00:09.288 ********
2026-03-19 02:26:00.508739 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508745 | orchestrator |
2026-03-19 02:26:00.508766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508781 | orchestrator | Thursday 19 March 2026 02:25:54 +0000 (0:00:00.210) 0:00:09.499 ********
2026-03-19 02:26:00.508787 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508793 | orchestrator |
2026-03-19 02:26:00.508800 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:00.508806 | orchestrator | Thursday 19 March 2026 02:25:54 +0000 (0:00:00.209) 0:00:09.708 ********
2026-03-19 02:26:00.508812 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508818 | orchestrator |
2026-03-19 02:26:00.508824 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-19 02:26:00.508830 | orchestrator | Thursday 19 March 2026 02:25:54 +0000 (0:00:00.200) 0:00:09.909 ********
2026-03-19 02:26:00.508836 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508843 | orchestrator |
2026-03-19 02:26:00.508849 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-19 02:26:00.508855 | orchestrator | Thursday 19 March 2026 02:25:54 +0000 (0:00:00.140) 0:00:10.049 ********
2026-03-19 02:26:00.508861 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '55f97389-0425-5b31-8593-f3b3ad53d7f9'}})
2026-03-19 02:26:00.508886 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '432058d8-20d3-534b-84ac-2a35b6cfcd9e'}})
2026-03-19 02:26:00.508893 | orchestrator |
2026-03-19 02:26:00.508898 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-19 02:26:00.508905 | orchestrator | Thursday 19 March 2026 02:25:54 +0000 (0:00:00.200) 0:00:10.250 ********
2026-03-19 02:26:00.508912 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.508918 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.508924 | orchestrator |
2026-03-19 02:26:00.508934 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-19 02:26:00.508943 | orchestrator | Thursday 19 March 2026 02:25:56 +0000 (0:00:02.032) 0:00:12.282 ********
2026-03-19 02:26:00.508953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.508964 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.508973 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.508981 | orchestrator |
2026-03-19 02:26:00.508990 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-19 02:26:00.508999 | orchestrator | Thursday 19 March 2026 02:25:56 +0000 (0:00:00.142) 0:00:12.425 ********
2026-03-19 02:26:00.509008 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509017 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509027 | orchestrator |
2026-03-19 02:26:00.509036 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-19 02:26:00.509045 | orchestrator | Thursday 19 March 2026 02:25:58 +0000 (0:00:01.526) 0:00:13.951 ********
2026-03-19 02:26:00.509054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509073 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509082 | orchestrator |
2026-03-19 02:26:00.509092 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-19 02:26:00.509102 | orchestrator | Thursday 19 March 2026 02:25:58 +0000 (0:00:00.162) 0:00:14.114 ********
2026-03-19 02:26:00.509133 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509146 | orchestrator |
2026-03-19 02:26:00.509155 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-19 02:26:00.509164 | orchestrator | Thursday 19 March 2026 02:25:58 +0000 (0:00:00.333) 0:00:14.447 ********
2026-03-19 02:26:00.509173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509197 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509206 | orchestrator |
2026-03-19 02:26:00.509215 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-19 02:26:00.509224 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.161) 0:00:14.608 ********
2026-03-19 02:26:00.509243 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509251 | orchestrator |
2026-03-19 02:26:00.509259 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-19 02:26:00.509269 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.156) 0:00:14.765 ********
2026-03-19 02:26:00.509286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509305 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509315 | orchestrator |
2026-03-19 02:26:00.509325 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-19 02:26:00.509335 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.164) 0:00:14.930 ********
2026-03-19 02:26:00.509344 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509353 | orchestrator |
2026-03-19 02:26:00.509359 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-19 02:26:00.509364 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.136) 0:00:15.066 ********
2026-03-19 02:26:00.509370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509382 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509387 | orchestrator |
2026-03-19 02:26:00.509393 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-19 02:26:00.509399 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.173) 0:00:15.240 ********
2026-03-19 02:26:00.509405 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:26:00.509411 | orchestrator |
2026-03-19 02:26:00.509417 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-19 02:26:00.509423 | orchestrator | Thursday 19 March 2026 02:25:59 +0000 (0:00:00.133) 0:00:15.373 ********
2026-03-19 02:26:00.509428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 02:26:00.509434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 02:26:00.509440 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:26:00.509446 | orchestrator |
2026-03-19 02:26:00.509452 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-19 02:26:00.509457 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.158) 0:00:15.532 ********
2026-03-19 02:26:00.509463 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:00.509469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:00.509475 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:00.509480 | orchestrator | 2026-03-19 02:26:00.509486 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-19 02:26:00.509492 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.162) 0:00:15.694 ******** 2026-03-19 02:26:00.509498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:00.509504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:00.509519 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:00.509525 | orchestrator | 2026-03-19 02:26:00.509531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-19 02:26:00.509536 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.159) 0:00:15.854 ******** 2026-03-19 02:26:00.509542 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:00.509548 | orchestrator | 2026-03-19 02:26:00.509555 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-19 02:26:00.509567 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.144) 0:00:15.998 ******** 2026-03-19 02:26:07.013824 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.013933 | orchestrator | 2026-03-19 02:26:07.013951 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-19 02:26:07.013965 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.137) 0:00:16.136 ******** 2026-03-19 02:26:07.013973 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.013981 | orchestrator | 2026-03-19 02:26:07.013989 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-19 02:26:07.013997 | orchestrator | Thursday 19 March 2026 02:26:00 +0000 (0:00:00.336) 0:00:16.472 ******** 2026-03-19 02:26:07.014004 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:26:07.014012 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-19 02:26:07.014069 | orchestrator | } 2026-03-19 02:26:07.014077 | orchestrator | 2026-03-19 02:26:07.014085 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-19 02:26:07.014092 | orchestrator | Thursday 19 March 2026 02:26:01 +0000 (0:00:00.150) 0:00:16.622 ******** 2026-03-19 02:26:07.014100 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:26:07.014107 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-19 02:26:07.014115 | orchestrator | } 2026-03-19 02:26:07.014122 | orchestrator | 2026-03-19 02:26:07.014130 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-19 02:26:07.014153 | orchestrator | Thursday 19 March 2026 02:26:01 +0000 (0:00:00.143) 0:00:16.765 ******** 2026-03-19 02:26:07.014161 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:26:07.014168 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-19 02:26:07.014176 | orchestrator | } 2026-03-19 02:26:07.014183 | orchestrator | 2026-03-19 02:26:07.014190 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-19 02:26:07.014198 | orchestrator | Thursday 19 March 2026 02:26:01 +0000 (0:00:00.143) 0:00:16.909 ******** 2026-03-19 02:26:07.014205 | orchestrator | ok: 
[testbed-node-3] 2026-03-19 02:26:07.014213 | orchestrator | 2026-03-19 02:26:07.014220 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-19 02:26:07.014227 | orchestrator | Thursday 19 March 2026 02:26:02 +0000 (0:00:00.675) 0:00:17.585 ******** 2026-03-19 02:26:07.014235 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:07.014242 | orchestrator | 2026-03-19 02:26:07.014274 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-19 02:26:07.014283 | orchestrator | Thursday 19 March 2026 02:26:02 +0000 (0:00:00.524) 0:00:18.110 ******** 2026-03-19 02:26:07.014291 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:07.014298 | orchestrator | 2026-03-19 02:26:07.014307 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-19 02:26:07.014316 | orchestrator | Thursday 19 March 2026 02:26:03 +0000 (0:00:00.514) 0:00:18.624 ******** 2026-03-19 02:26:07.014325 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:07.014334 | orchestrator | 2026-03-19 02:26:07.014342 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-19 02:26:07.014351 | orchestrator | Thursday 19 March 2026 02:26:03 +0000 (0:00:00.155) 0:00:18.780 ******** 2026-03-19 02:26:07.014360 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014368 | orchestrator | 2026-03-19 02:26:07.014377 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-19 02:26:07.014408 | orchestrator | Thursday 19 March 2026 02:26:03 +0000 (0:00:00.113) 0:00:18.894 ******** 2026-03-19 02:26:07.014417 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014426 | orchestrator | 2026-03-19 02:26:07.014435 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-19 02:26:07.014443 | orchestrator | 
Thursday 19 March 2026 02:26:03 +0000 (0:00:00.098) 0:00:18.993 ******** 2026-03-19 02:26:07.014452 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:26:07.014460 | orchestrator |  "vgs_report": { 2026-03-19 02:26:07.014469 | orchestrator |  "vg": [] 2026-03-19 02:26:07.014477 | orchestrator |  } 2026-03-19 02:26:07.014486 | orchestrator | } 2026-03-19 02:26:07.014495 | orchestrator | 2026-03-19 02:26:07.014504 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-19 02:26:07.014513 | orchestrator | Thursday 19 March 2026 02:26:03 +0000 (0:00:00.147) 0:00:19.141 ******** 2026-03-19 02:26:07.014522 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014529 | orchestrator | 2026-03-19 02:26:07.014536 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-19 02:26:07.014543 | orchestrator | Thursday 19 March 2026 02:26:03 +0000 (0:00:00.135) 0:00:19.276 ******** 2026-03-19 02:26:07.014550 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014557 | orchestrator | 2026-03-19 02:26:07.014564 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-19 02:26:07.014572 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.344) 0:00:19.621 ******** 2026-03-19 02:26:07.014579 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014586 | orchestrator | 2026-03-19 02:26:07.014593 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-19 02:26:07.014600 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.136) 0:00:19.757 ******** 2026-03-19 02:26:07.014607 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014614 | orchestrator | 2026-03-19 02:26:07.014622 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-19 02:26:07.014629 | orchestrator | 
Thursday 19 March 2026 02:26:04 +0000 (0:00:00.141) 0:00:19.899 ******** 2026-03-19 02:26:07.014636 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014643 | orchestrator | 2026-03-19 02:26:07.014650 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-19 02:26:07.014658 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.149) 0:00:20.049 ******** 2026-03-19 02:26:07.014686 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014694 | orchestrator | 2026-03-19 02:26:07.014701 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-19 02:26:07.014708 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.144) 0:00:20.193 ******** 2026-03-19 02:26:07.014715 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014723 | orchestrator | 2026-03-19 02:26:07.014730 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-19 02:26:07.014737 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.146) 0:00:20.340 ******** 2026-03-19 02:26:07.014760 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014768 | orchestrator | 2026-03-19 02:26:07.014775 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-19 02:26:07.014783 | orchestrator | Thursday 19 March 2026 02:26:04 +0000 (0:00:00.141) 0:00:20.481 ******** 2026-03-19 02:26:07.014790 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014797 | orchestrator | 2026-03-19 02:26:07.014804 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-19 02:26:07.014811 | orchestrator | Thursday 19 March 2026 02:26:05 +0000 (0:00:00.141) 0:00:20.622 ******** 2026-03-19 02:26:07.014819 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014826 | orchestrator | 2026-03-19 02:26:07.014833 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-19 02:26:07.014840 | orchestrator | Thursday 19 March 2026 02:26:05 +0000 (0:00:00.138) 0:00:20.761 ******** 2026-03-19 02:26:07.014854 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014861 | orchestrator | 2026-03-19 02:26:07.014868 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-19 02:26:07.014875 | orchestrator | Thursday 19 March 2026 02:26:05 +0000 (0:00:00.150) 0:00:20.911 ******** 2026-03-19 02:26:07.014882 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014890 | orchestrator | 2026-03-19 02:26:07.014901 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-19 02:26:07.014909 | orchestrator | Thursday 19 March 2026 02:26:05 +0000 (0:00:00.138) 0:00:21.050 ******** 2026-03-19 02:26:07.014916 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014923 | orchestrator | 2026-03-19 02:26:07.014930 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-19 02:26:07.014938 | orchestrator | Thursday 19 March 2026 02:26:05 +0000 (0:00:00.134) 0:00:21.184 ******** 2026-03-19 02:26:07.014945 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014952 | orchestrator | 2026-03-19 02:26:07.014959 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-19 02:26:07.014966 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.328) 0:00:21.513 ******** 2026-03-19 02:26:07.014975 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:07.014984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 
'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:07.014991 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.014998 | orchestrator | 2026-03-19 02:26:07.015005 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-19 02:26:07.015013 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.171) 0:00:21.684 ******** 2026-03-19 02:26:07.015020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:07.015027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:07.015035 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.015042 | orchestrator | 2026-03-19 02:26:07.015049 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-19 02:26:07.015056 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.177) 0:00:21.862 ******** 2026-03-19 02:26:07.015063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:07.015071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:07.015078 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.015085 | orchestrator | 2026-03-19 02:26:07.015092 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-19 02:26:07.015099 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.166) 0:00:22.029 ******** 2026-03-19 02:26:07.015106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:07.015114 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:07.015121 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.015128 | orchestrator | 2026-03-19 02:26:07.015135 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-19 02:26:07.015143 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.158) 0:00:22.188 ******** 2026-03-19 02:26:07.015155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:07.015163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:07.015170 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:07.015177 | orchestrator | 2026-03-19 02:26:07.015184 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-19 02:26:07.015191 | orchestrator | Thursday 19 March 2026 02:26:06 +0000 (0:00:00.158) 0:00:22.347 ******** 2026-03-19 02:26:07.015204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.239856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.239999 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.240024 | orchestrator | 2026-03-19 02:26:12.240043 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-19 02:26:12.240062 | orchestrator | Thursday 19 March 2026 02:26:07 +0000 (0:00:00.158) 0:00:22.505 ******** 2026-03-19 02:26:12.240080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.240098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.240114 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.240131 | orchestrator | 2026-03-19 02:26:12.240173 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-19 02:26:12.240192 | orchestrator | Thursday 19 March 2026 02:26:07 +0000 (0:00:00.157) 0:00:22.663 ******** 2026-03-19 02:26:12.240209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.240227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.240244 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.240261 | orchestrator | 2026-03-19 02:26:12.240278 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-19 02:26:12.240298 | orchestrator | Thursday 19 March 2026 02:26:07 +0000 (0:00:00.147) 0:00:22.810 ******** 2026-03-19 02:26:12.240317 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:12.240336 | orchestrator | 2026-03-19 02:26:12.240354 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-19 02:26:12.240373 | orchestrator | Thursday 19 March 2026 02:26:07 +0000 
(0:00:00.531) 0:00:23.342 ******** 2026-03-19 02:26:12.240392 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:12.240412 | orchestrator | 2026-03-19 02:26:12.240431 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-19 02:26:12.240451 | orchestrator | Thursday 19 March 2026 02:26:08 +0000 (0:00:00.524) 0:00:23.867 ******** 2026-03-19 02:26:12.240470 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:26:12.240489 | orchestrator | 2026-03-19 02:26:12.240509 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-19 02:26:12.240529 | orchestrator | Thursday 19 March 2026 02:26:08 +0000 (0:00:00.146) 0:00:24.013 ******** 2026-03-19 02:26:12.240549 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'vg_name': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}) 2026-03-19 02:26:12.240569 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'vg_name': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}) 2026-03-19 02:26:12.240624 | orchestrator | 2026-03-19 02:26:12.240643 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-19 02:26:12.240727 | orchestrator | Thursday 19 March 2026 02:26:08 +0000 (0:00:00.157) 0:00:24.171 ******** 2026-03-19 02:26:12.240751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.240769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.240787 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.240804 | orchestrator | 2026-03-19 02:26:12.240821 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-19 02:26:12.240838 | orchestrator | Thursday 19 March 2026 02:26:09 +0000 (0:00:00.346) 0:00:24.517 ******** 2026-03-19 02:26:12.240854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.240871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.240888 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.240905 | orchestrator | 2026-03-19 02:26:12.240922 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-19 02:26:12.240938 | orchestrator | Thursday 19 March 2026 02:26:09 +0000 (0:00:00.165) 0:00:24.683 ******** 2026-03-19 02:26:12.240956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 02:26:12.240977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 02:26:12.240994 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:26:12.241011 | orchestrator | 2026-03-19 02:26:12.241027 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-19 02:26:12.241044 | orchestrator | Thursday 19 March 2026 02:26:09 +0000 (0:00:00.158) 0:00:24.841 ******** 2026-03-19 02:26:12.241089 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:26:12.241108 | orchestrator |  "lvm_report": { 2026-03-19 02:26:12.241126 | orchestrator |  "lv": [ 2026-03-19 02:26:12.241143 | orchestrator |  { 2026-03-19 02:26:12.241161 | orchestrator |  "lv_name": 
"osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e", 2026-03-19 02:26:12.241180 | orchestrator |  "vg_name": "ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e" 2026-03-19 02:26:12.241197 | orchestrator |  }, 2026-03-19 02:26:12.241214 | orchestrator |  { 2026-03-19 02:26:12.241231 | orchestrator |  "lv_name": "osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9", 2026-03-19 02:26:12.241247 | orchestrator |  "vg_name": "ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9" 2026-03-19 02:26:12.241264 | orchestrator |  } 2026-03-19 02:26:12.241281 | orchestrator |  ], 2026-03-19 02:26:12.241298 | orchestrator |  "pv": [ 2026-03-19 02:26:12.241315 | orchestrator |  { 2026-03-19 02:26:12.241332 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-19 02:26:12.241350 | orchestrator |  "vg_name": "ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9" 2026-03-19 02:26:12.241367 | orchestrator |  }, 2026-03-19 02:26:12.241384 | orchestrator |  { 2026-03-19 02:26:12.241414 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-19 02:26:12.241431 | orchestrator |  "vg_name": "ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e" 2026-03-19 02:26:12.241448 | orchestrator |  } 2026-03-19 02:26:12.241464 | orchestrator |  ] 2026-03-19 02:26:12.241481 | orchestrator |  } 2026-03-19 02:26:12.241499 | orchestrator | } 2026-03-19 02:26:12.241530 | orchestrator | 2026-03-19 02:26:12.241548 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-19 02:26:12.241564 | orchestrator | 2026-03-19 02:26:12.241581 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 02:26:12.241599 | orchestrator | Thursday 19 March 2026 02:26:09 +0000 (0:00:00.298) 0:00:25.140 ******** 2026-03-19 02:26:12.241618 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-19 02:26:12.241635 | orchestrator | 2026-03-19 02:26:12.241652 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 
02:26:12.241699 | orchestrator | Thursday 19 March 2026 02:26:09 +0000 (0:00:00.260) 0:00:25.400 ******** 2026-03-19 02:26:12.241717 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:26:12.241734 | orchestrator | 2026-03-19 02:26:12.241751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:12.241768 | orchestrator | Thursday 19 March 2026 02:26:10 +0000 (0:00:00.243) 0:00:25.644 ******** 2026-03-19 02:26:12.241784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-19 02:26:12.241801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-19 02:26:12.241817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-19 02:26:12.241834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-19 02:26:12.241850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-19 02:26:12.241868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-19 02:26:12.241885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-19 02:26:12.241902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-19 02:26:12.241920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-19 02:26:12.241936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-19 02:26:12.241951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-19 02:26:12.241966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-19 02:26:12.241983 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-19 02:26:12.242001 | orchestrator |
2026-03-19 02:26:12.242087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242109 | orchestrator | Thursday 19 March 2026 02:26:10 +0000 (0:00:00.421) 0:00:26.066 ********
2026-03-19 02:26:12.242127 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242147 | orchestrator |
2026-03-19 02:26:12.242164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242182 | orchestrator | Thursday 19 March 2026 02:26:10 +0000 (0:00:00.204) 0:00:26.270 ********
2026-03-19 02:26:12.242201 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242220 | orchestrator |
2026-03-19 02:26:12.242238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242253 | orchestrator | Thursday 19 March 2026 02:26:11 +0000 (0:00:00.663) 0:00:26.934 ********
2026-03-19 02:26:12.242263 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242273 | orchestrator |
2026-03-19 02:26:12.242282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242292 | orchestrator | Thursday 19 March 2026 02:26:11 +0000 (0:00:00.198) 0:00:27.132 ********
2026-03-19 02:26:12.242301 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242311 | orchestrator |
2026-03-19 02:26:12.242320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242330 | orchestrator | Thursday 19 March 2026 02:26:11 +0000 (0:00:00.203) 0:00:27.335 ********
2026-03-19 02:26:12.242351 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242361 | orchestrator |
2026-03-19 02:26:12.242371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:12.242380 | orchestrator | Thursday 19 March 2026 02:26:12 +0000 (0:00:00.189) 0:00:27.524 ********
2026-03-19 02:26:12.242390 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:12.242399 | orchestrator |
2026-03-19 02:26:12.242423 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.777415 | orchestrator | Thursday 19 March 2026 02:26:12 +0000 (0:00:00.204) 0:00:27.728 ********
2026-03-19 02:26:23.777599 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.777627 | orchestrator |
2026-03-19 02:26:23.777648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.777741 | orchestrator | Thursday 19 March 2026 02:26:12 +0000 (0:00:00.205) 0:00:27.934 ********
2026-03-19 02:26:23.777760 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.777777 | orchestrator |
2026-03-19 02:26:23.777796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.777813 | orchestrator | Thursday 19 March 2026 02:26:12 +0000 (0:00:00.212) 0:00:28.147 ********
2026-03-19 02:26:23.777831 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e)
2026-03-19 02:26:23.777851 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e)
2026-03-19 02:26:23.777869 | orchestrator |
2026-03-19 02:26:23.777911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.777934 | orchestrator | Thursday 19 March 2026 02:26:13 +0000 (0:00:00.425) 0:00:28.573 ********
2026-03-19 02:26:23.777956 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5)
2026-03-19 02:26:23.777979 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5)
2026-03-19 02:26:23.777999 | orchestrator |
2026-03-19 02:26:23.778114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.778136 | orchestrator | Thursday 19 March 2026 02:26:13 +0000 (0:00:00.422) 0:00:28.995 ********
2026-03-19 02:26:23.778154 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e)
2026-03-19 02:26:23.778172 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e)
2026-03-19 02:26:23.778188 | orchestrator |
2026-03-19 02:26:23.778206 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.778225 | orchestrator | Thursday 19 March 2026 02:26:13 +0000 (0:00:00.439) 0:00:29.435 ********
2026-03-19 02:26:23.778245 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8)
2026-03-19 02:26:23.778262 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8)
2026-03-19 02:26:23.778278 | orchestrator |
2026-03-19 02:26:23.778294 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-19 02:26:23.778312 | orchestrator | Thursday 19 March 2026 02:26:14 +0000 (0:00:00.653) 0:00:30.088 ********
2026-03-19 02:26:23.778328 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-19 02:26:23.778344 | orchestrator |
2026-03-19 02:26:23.778361 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.778378 | orchestrator | Thursday 19 March 2026 02:26:15 +0000 (0:00:00.574) 0:00:30.663 ********
2026-03-19 02:26:23.778393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-19 02:26:23.778409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-19 02:26:23.778426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-19 02:26:23.778481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-19 02:26:23.778498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-19 02:26:23.778513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-19 02:26:23.778529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-19 02:26:23.778546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-19 02:26:23.778562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-19 02:26:23.778578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-19 02:26:23.778595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-19 02:26:23.778611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-19 02:26:23.778628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-19 02:26:23.778644 | orchestrator |
2026-03-19 02:26:23.778694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.778712 | orchestrator | Thursday 19 March 2026 02:26:16 +0000 (0:00:00.889) 0:00:31.552 ********
2026-03-19 02:26:23.778729 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.778746 | orchestrator |
2026-03-19 02:26:23.778763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.778779 | orchestrator | Thursday 19 March 2026 02:26:16 +0000 (0:00:00.295) 0:00:31.848 ********
2026-03-19 02:26:23.778796 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.778812 | orchestrator |
2026-03-19 02:26:23.778829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.778845 | orchestrator | Thursday 19 March 2026 02:26:16 +0000 (0:00:00.210) 0:00:32.059 ********
2026-03-19 02:26:23.778861 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.778877 | orchestrator |
2026-03-19 02:26:23.778926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.778943 | orchestrator | Thursday 19 March 2026 02:26:16 +0000 (0:00:00.215) 0:00:32.274 ********
2026-03-19 02:26:23.778960 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.778977 | orchestrator |
2026-03-19 02:26:23.778993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779009 | orchestrator | Thursday 19 March 2026 02:26:16 +0000 (0:00:00.205) 0:00:32.480 ********
2026-03-19 02:26:23.779025 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779041 | orchestrator |
2026-03-19 02:26:23.779056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779072 | orchestrator | Thursday 19 March 2026 02:26:17 +0000 (0:00:00.212) 0:00:32.692 ********
2026-03-19 02:26:23.779088 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779104 | orchestrator |
2026-03-19 02:26:23.779120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779136 | orchestrator | Thursday 19 March 2026 02:26:17 +0000 (0:00:00.203) 0:00:32.896 ********
2026-03-19 02:26:23.779167 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779183 | orchestrator |
2026-03-19 02:26:23.779199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779216 | orchestrator | Thursday 19 March 2026 02:26:17 +0000 (0:00:00.203) 0:00:33.100 ********
2026-03-19 02:26:23.779232 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779247 | orchestrator |
2026-03-19 02:26:23.779263 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779279 | orchestrator | Thursday 19 March 2026 02:26:17 +0000 (0:00:00.200) 0:00:33.301 ********
2026-03-19 02:26:23.779295 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-19 02:26:23.779328 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-19 02:26:23.779344 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-19 02:26:23.779359 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-19 02:26:23.779375 | orchestrator |
2026-03-19 02:26:23.779391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779408 | orchestrator | Thursday 19 March 2026 02:26:18 +0000 (0:00:00.943) 0:00:34.244 ********
2026-03-19 02:26:23.779424 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779441 | orchestrator |
2026-03-19 02:26:23.779457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779474 | orchestrator | Thursday 19 March 2026 02:26:19 +0000 (0:00:00.633) 0:00:34.878 ********
2026-03-19 02:26:23.779491 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779507 | orchestrator |
2026-03-19 02:26:23.779522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779540 | orchestrator | Thursday 19 March 2026 02:26:19 +0000 (0:00:00.207) 0:00:35.085 ********
2026-03-19 02:26:23.779557 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779573 | orchestrator |
2026-03-19 02:26:23.779593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-19 02:26:23.779612 | orchestrator | Thursday 19 March 2026 02:26:19 +0000 (0:00:00.218) 0:00:35.303 ********
2026-03-19 02:26:23.779630 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779647 | orchestrator |
2026-03-19 02:26:23.779715 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-19 02:26:23.779734 | orchestrator | Thursday 19 March 2026 02:26:20 +0000 (0:00:00.214) 0:00:35.518 ********
2026-03-19 02:26:23.779751 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.779768 | orchestrator |
2026-03-19 02:26:23.779786 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-19 02:26:23.779802 | orchestrator | Thursday 19 March 2026 02:26:20 +0000 (0:00:00.133) 0:00:35.652 ********
2026-03-19 02:26:23.779820 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b653c337-740c-52f4-bc46-3e8e37039a81'}})
2026-03-19 02:26:23.779838 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}})
2026-03-19 02:26:23.779856 | orchestrator |
2026-03-19 02:26:23.779874 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-19 02:26:23.779892 | orchestrator | Thursday 19 March 2026 02:26:20 +0000 (0:00:00.188) 0:00:35.841 ********
2026-03-19 02:26:23.779911 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:23.779931 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:23.779948 | orchestrator |
2026-03-19 02:26:23.779966 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-19 02:26:23.779984 | orchestrator | Thursday 19 March 2026 02:26:22 +0000 (0:00:01.880) 0:00:37.721 ********
2026-03-19 02:26:23.780002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:23.780020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:23.780037 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:23.780055 | orchestrator |
2026-03-19 02:26:23.780073 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-19 02:26:23.780091 | orchestrator | Thursday 19 March 2026 02:26:22 +0000 (0:00:00.163) 0:00:37.885 ********
2026-03-19 02:26:23.780108 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:23.780162 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.413368 | orchestrator |
2026-03-19 02:26:29.413533 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-19 02:26:29.413550 | orchestrator | Thursday 19 March 2026 02:26:23 +0000 (0:00:01.379) 0:00:39.264 ********
2026-03-19 02:26:29.413561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.413574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.413584 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413595 | orchestrator |
2026-03-19 02:26:29.413621 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-19 02:26:29.413632 | orchestrator | Thursday 19 March 2026 02:26:23 +0000 (0:00:00.152) 0:00:39.416 ********
2026-03-19 02:26:29.413641 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413710 | orchestrator |
2026-03-19 02:26:29.413723 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-19 02:26:29.413732 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.136) 0:00:39.552 ********
2026-03-19 02:26:29.413742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.413752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.413762 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413772 | orchestrator |
2026-03-19 02:26:29.413782 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-19 02:26:29.413791 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.147) 0:00:39.699 ********
2026-03-19 02:26:29.413802 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413811 | orchestrator |
2026-03-19 02:26:29.413821 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-19 02:26:29.413831 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.148) 0:00:39.848 ********
2026-03-19 02:26:29.413840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.413850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.413860 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413871 | orchestrator |
2026-03-19 02:26:29.413881 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-19 02:26:29.413891 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.357) 0:00:40.205 ********
2026-03-19 02:26:29.413900 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413910 | orchestrator |
2026-03-19 02:26:29.413920 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-19 02:26:29.413930 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.140) 0:00:40.346 ********
2026-03-19 02:26:29.413940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.413950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.413960 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.413969 | orchestrator |
2026-03-19 02:26:29.413979 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-19 02:26:29.414011 | orchestrator | Thursday 19 March 2026 02:26:24 +0000 (0:00:00.150) 0:00:40.496 ********
2026-03-19 02:26:29.414090 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:29.414102 | orchestrator |
2026-03-19 02:26:29.414111 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-19 02:26:29.414121 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.143) 0:00:40.640 ********
2026-03-19 02:26:29.414131 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.414140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.414150 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414159 | orchestrator |
2026-03-19 02:26:29.414169 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-19 02:26:29.414178 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.162) 0:00:40.802 ********
2026-03-19 02:26:29.414188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.414197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.414207 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414217 | orchestrator |
2026-03-19 02:26:29.414226 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-19 02:26:29.414257 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.160) 0:00:40.962 ********
2026-03-19 02:26:29.414267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:29.414277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:29.414287 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414297 | orchestrator |
2026-03-19 02:26:29.414306 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-19 02:26:29.414316 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.162) 0:00:41.124 ********
2026-03-19 02:26:29.414332 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414342 | orchestrator |
2026-03-19 02:26:29.414352 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-19 02:26:29.414362 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.152) 0:00:41.276 ********
2026-03-19 02:26:29.414371 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414381 | orchestrator |
2026-03-19 02:26:29.414391 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-19 02:26:29.414400 | orchestrator | Thursday 19 March 2026 02:26:25 +0000 (0:00:00.136) 0:00:41.413 ********
2026-03-19 02:26:29.414410 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414420 | orchestrator |
2026-03-19 02:26:29.414429 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-19 02:26:29.414439 | orchestrator | Thursday 19 March 2026 02:26:26 +0000 (0:00:00.142) 0:00:41.555 ********
2026-03-19 02:26:29.414448 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 02:26:29.414458 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-19 02:26:29.414468 | orchestrator | }
2026-03-19 02:26:29.414478 | orchestrator |
2026-03-19 02:26:29.414487 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-19 02:26:29.414497 | orchestrator | Thursday 19 March 2026 02:26:26 +0000 (0:00:00.137) 0:00:41.692 ********
2026-03-19 02:26:29.414507 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 02:26:29.414517 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-19 02:26:29.414535 | orchestrator | }
2026-03-19 02:26:29.414545 | orchestrator |
2026-03-19 02:26:29.414554 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-19 02:26:29.414564 | orchestrator | Thursday 19 March 2026 02:26:26 +0000 (0:00:00.145) 0:00:41.838 ********
2026-03-19 02:26:29.414573 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 02:26:29.414583 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-19 02:26:29.414593 | orchestrator | }
2026-03-19 02:26:29.414603 | orchestrator |
2026-03-19 02:26:29.414612 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-19 02:26:29.414622 | orchestrator | Thursday 19 March 2026 02:26:26 +0000 (0:00:00.344) 0:00:42.182 ********
2026-03-19 02:26:29.414632 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:29.414641 | orchestrator |
2026-03-19 02:26:29.414667 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-19 02:26:29.414677 | orchestrator | Thursday 19 March 2026 02:26:27 +0000 (0:00:00.529) 0:00:42.712 ********
2026-03-19 02:26:29.414687 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:29.414697 | orchestrator |
2026-03-19 02:26:29.414706 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-19 02:26:29.414716 | orchestrator | Thursday 19 March 2026 02:26:27 +0000 (0:00:00.535) 0:00:43.247 ********
2026-03-19 02:26:29.414726 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:29.414736 | orchestrator |
2026-03-19 02:26:29.414745 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-19 02:26:29.414755 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.544) 0:00:43.792 ********
2026-03-19 02:26:29.414765 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:29.414775 | orchestrator |
2026-03-19 02:26:29.414785 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-19 02:26:29.414794 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.152) 0:00:43.944 ********
2026-03-19 02:26:29.414804 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414813 | orchestrator |
2026-03-19 02:26:29.414823 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-19 02:26:29.414833 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.125) 0:00:44.070 ********
2026-03-19 02:26:29.414842 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414852 | orchestrator |
2026-03-19 02:26:29.414862 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-19 02:26:29.414871 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.129) 0:00:44.200 ********
2026-03-19 02:26:29.414881 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 02:26:29.414891 | orchestrator |  "vgs_report": {
2026-03-19 02:26:29.414901 | orchestrator |  "vg": []
2026-03-19 02:26:29.414912 | orchestrator |  }
2026-03-19 02:26:29.414922 | orchestrator | }
2026-03-19 02:26:29.414934 | orchestrator |
2026-03-19 02:26:29.414951 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-19 02:26:29.414968 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.148) 0:00:44.348 ********
2026-03-19 02:26:29.414984 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.414999 | orchestrator |
2026-03-19 02:26:29.415015 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-19 02:26:29.415033 | orchestrator | Thursday 19 March 2026 02:26:28 +0000 (0:00:00.139) 0:00:44.488 ********
2026-03-19 02:26:29.415049 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.415064 | orchestrator |
2026-03-19 02:26:29.415081 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-19 02:26:29.415093 | orchestrator | Thursday 19 March 2026 02:26:29 +0000 (0:00:00.137) 0:00:44.625 ********
2026-03-19 02:26:29.415103 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.415112 | orchestrator |
2026-03-19 02:26:29.415122 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-19 02:26:29.415132 | orchestrator | Thursday 19 March 2026 02:26:29 +0000 (0:00:00.128) 0:00:44.754 ********
2026-03-19 02:26:29.415149 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:29.415159 | orchestrator |
2026-03-19 02:26:29.415177 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-19 02:26:34.273820 | orchestrator | Thursday 19 March 2026 02:26:29 +0000 (0:00:00.146) 0:00:44.900 ********
2026-03-19 02:26:34.273971 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.273987 | orchestrator |
2026-03-19 02:26:34.273997 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-19 02:26:34.274005 | orchestrator | Thursday 19 March 2026 02:26:29 +0000 (0:00:00.340) 0:00:45.241 ********
2026-03-19 02:26:34.274064 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.274809 | orchestrator |
2026-03-19 02:26:34.274837 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-19 02:26:34.274846 | orchestrator | Thursday 19 March 2026 02:26:29 +0000 (0:00:00.145) 0:00:45.386 ********
2026-03-19 02:26:34.274855 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.274863 | orchestrator |
2026-03-19 02:26:34.274886 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-19 02:26:34.274895 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.144) 0:00:45.531 ********
2026-03-19 02:26:34.274903 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.274911 | orchestrator |
2026-03-19 02:26:34.274919 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-19 02:26:34.274927 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.145) 0:00:45.677 ********
2026-03-19 02:26:34.274935 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.274942 | orchestrator |
2026-03-19 02:26:34.274950 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-19 02:26:34.274958 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.150) 0:00:45.827 ********
2026-03-19 02:26:34.274966 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.274974 | orchestrator |
2026-03-19 02:26:34.274982 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-19 02:26:34.274991 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.143) 0:00:45.971 ********
2026-03-19 02:26:34.274999 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275007 | orchestrator |
2026-03-19 02:26:34.275015 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-19 02:26:34.275023 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.148) 0:00:46.119 ********
2026-03-19 02:26:34.275030 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275038 | orchestrator |
2026-03-19 02:26:34.275050 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-19 02:26:34.275063 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.162) 0:00:46.282 ********
2026-03-19 02:26:34.275080 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275100 | orchestrator |
2026-03-19 02:26:34.275113 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-19 02:26:34.275125 | orchestrator | Thursday 19 March 2026 02:26:30 +0000 (0:00:00.148) 0:00:46.430 ********
2026-03-19 02:26:34.275138 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275151 | orchestrator |
2026-03-19 02:26:34.275163 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-19 02:26:34.275176 | orchestrator | Thursday 19 March 2026 02:26:31 +0000 (0:00:00.135) 0:00:46.565 ********
2026-03-19 02:26:34.275192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275223 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275236 | orchestrator |
2026-03-19 02:26:34.275248 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-19 02:26:34.275287 | orchestrator | Thursday 19 March 2026 02:26:31 +0000 (0:00:00.158) 0:00:46.723 ********
2026-03-19 02:26:34.275296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275313 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275321 | orchestrator |
2026-03-19 02:26:34.275329 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-19 02:26:34.275337 | orchestrator | Thursday 19 March 2026 02:26:31 +0000 (0:00:00.168) 0:00:46.892 ********
2026-03-19 02:26:34.275345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275360 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275369 | orchestrator |
2026-03-19 02:26:34.275377 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-19 02:26:34.275385 | orchestrator | Thursday 19 March 2026 02:26:31 +0000 (0:00:00.364) 0:00:47.256 ********
2026-03-19 02:26:34.275393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275410 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275417 | orchestrator |
2026-03-19 02:26:34.275449 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-19 02:26:34.275458 | orchestrator | Thursday 19 March 2026 02:26:31 +0000 (0:00:00.158) 0:00:47.414 ********
2026-03-19 02:26:34.275466 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275482 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275490 | orchestrator |
2026-03-19 02:26:34.275505 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-19 02:26:34.275513 | orchestrator | Thursday 19 March 2026 02:26:32 +0000 (0:00:00.169) 0:00:47.584 ********
2026-03-19 02:26:34.275521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275537 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275545 | orchestrator |
2026-03-19 02:26:34.275553 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-19 02:26:34.275561 | orchestrator | Thursday 19 March 2026 02:26:32 +0000 (0:00:00.171) 0:00:47.755 ********
2026-03-19 02:26:34.275569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275585 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275599 | orchestrator |
2026-03-19 02:26:34.275607 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-19 02:26:34.275615 | orchestrator | Thursday 19 March 2026 02:26:32 +0000 (0:00:00.159) 0:00:47.914 ********
2026-03-19 02:26:34.275623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275639 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275647 | orchestrator |
2026-03-19 02:26:34.275688 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-19 02:26:34.275701 | orchestrator | Thursday 19 March 2026 02:26:32 +0000 (0:00:00.156) 0:00:48.071 ********
2026-03-19 02:26:34.275714 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:34.275728 | orchestrator |
2026-03-19 02:26:34.275740 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-19 02:26:34.275756 | orchestrator | Thursday 19 March 2026 02:26:33 +0000 (0:00:00.523) 0:00:48.594 ********
2026-03-19 02:26:34.275776 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:34.275788 | orchestrator |
2026-03-19 02:26:34.275802 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-19 02:26:34.275815 | orchestrator | Thursday 19 March 2026 02:26:33 +0000 (0:00:00.534) 0:00:49.129 ********
2026-03-19 02:26:34.275828 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:26:34.275841 | orchestrator |
2026-03-19 02:26:34.275854 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-19 02:26:34.275863 | orchestrator | Thursday 19 March 2026 02:26:33 +0000 (0:00:00.147) 0:00:49.276 ********
2026-03-19 02:26:34.275871 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'vg_name': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275881 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'vg_name': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275889 | orchestrator |
2026-03-19 02:26:34.275897 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-19 02:26:34.275904 | orchestrator | Thursday 19 March 2026 02:26:33 +0000 (0:00:00.175) 0:00:49.452 ********
2026-03-19 02:26:34.275912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:34.275929 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:34.275937 | orchestrator |
2026-03-19 02:26:34.275945 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-19 02:26:34.275953 | orchestrator | Thursday 19 March 2026 02:26:34 +0000 (0:00:00.161) 0:00:49.613 ********
2026-03-19 02:26:34.275961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 02:26:34.275978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 02:26:40.803611 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:26:40.803757 | orchestrator |
2026-03-19 02:26:40.803767 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-19 02:26:40.803773 |
orchestrator | Thursday 19 March 2026 02:26:34 +0000 (0:00:00.149) 0:00:49.762 ******** 2026-03-19 02:26:40.803778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})  2026-03-19 02:26:40.803815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})  2026-03-19 02:26:40.803820 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:26:40.803824 | orchestrator | 2026-03-19 02:26:40.803828 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-19 02:26:40.803832 | orchestrator | Thursday 19 March 2026 02:26:34 +0000 (0:00:00.360) 0:00:50.122 ******** 2026-03-19 02:26:40.803836 | orchestrator | ok: [testbed-node-4] => { 2026-03-19 02:26:40.803840 | orchestrator |  "lvm_report": { 2026-03-19 02:26:40.803846 | orchestrator |  "lv": [ 2026-03-19 02:26:40.803850 | orchestrator |  { 2026-03-19 02:26:40.803855 | orchestrator |  "lv_name": "osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8", 2026-03-19 02:26:40.803859 | orchestrator |  "vg_name": "ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8" 2026-03-19 02:26:40.803863 | orchestrator |  }, 2026-03-19 02:26:40.803868 | orchestrator |  { 2026-03-19 02:26:40.803872 | orchestrator |  "lv_name": "osd-block-b653c337-740c-52f4-bc46-3e8e37039a81", 2026-03-19 02:26:40.803875 | orchestrator |  "vg_name": "ceph-b653c337-740c-52f4-bc46-3e8e37039a81" 2026-03-19 02:26:40.803879 | orchestrator |  } 2026-03-19 02:26:40.803883 | orchestrator |  ], 2026-03-19 02:26:40.803888 | orchestrator |  "pv": [ 2026-03-19 02:26:40.803892 | orchestrator |  { 2026-03-19 02:26:40.803896 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-19 02:26:40.803900 | orchestrator |  "vg_name": "ceph-b653c337-740c-52f4-bc46-3e8e37039a81" 2026-03-19 02:26:40.803904 | orchestrator |  }, 2026-03-19 
02:26:40.803908 | orchestrator |  { 2026-03-19 02:26:40.803912 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-19 02:26:40.803916 | orchestrator |  "vg_name": "ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8" 2026-03-19 02:26:40.803920 | orchestrator |  } 2026-03-19 02:26:40.803924 | orchestrator |  ] 2026-03-19 02:26:40.803928 | orchestrator |  } 2026-03-19 02:26:40.803933 | orchestrator | } 2026-03-19 02:26:40.803937 | orchestrator | 2026-03-19 02:26:40.803941 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-19 02:26:40.803945 | orchestrator | 2026-03-19 02:26:40.803949 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-19 02:26:40.803953 | orchestrator | Thursday 19 March 2026 02:26:34 +0000 (0:00:00.316) 0:00:50.439 ******** 2026-03-19 02:26:40.803957 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-19 02:26:40.803961 | orchestrator | 2026-03-19 02:26:40.803965 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-19 02:26:40.803969 | orchestrator | Thursday 19 March 2026 02:26:35 +0000 (0:00:00.273) 0:00:50.713 ******** 2026-03-19 02:26:40.803973 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:26:40.803977 | orchestrator | 2026-03-19 02:26:40.803980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.803984 | orchestrator | Thursday 19 March 2026 02:26:35 +0000 (0:00:00.246) 0:00:50.959 ******** 2026-03-19 02:26:40.803988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-19 02:26:40.803992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-19 02:26:40.803996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-19 02:26:40.804000 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-19 02:26:40.804004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-19 02:26:40.804008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-19 02:26:40.804012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-19 02:26:40.804020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-19 02:26:40.804024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-19 02:26:40.804028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-19 02:26:40.804032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-19 02:26:40.804036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-19 02:26:40.804040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-19 02:26:40.804043 | orchestrator | 2026-03-19 02:26:40.804047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804051 | orchestrator | Thursday 19 March 2026 02:26:35 +0000 (0:00:00.442) 0:00:51.401 ******** 2026-03-19 02:26:40.804055 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804059 | orchestrator | 2026-03-19 02:26:40.804063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804067 | orchestrator | Thursday 19 March 2026 02:26:36 +0000 (0:00:00.231) 0:00:51.633 ******** 2026-03-19 02:26:40.804071 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804075 | orchestrator | 2026-03-19 
02:26:40.804079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804093 | orchestrator | Thursday 19 March 2026 02:26:36 +0000 (0:00:00.193) 0:00:51.827 ******** 2026-03-19 02:26:40.804098 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804102 | orchestrator | 2026-03-19 02:26:40.804105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804109 | orchestrator | Thursday 19 March 2026 02:26:36 +0000 (0:00:00.201) 0:00:52.028 ******** 2026-03-19 02:26:40.804113 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804117 | orchestrator | 2026-03-19 02:26:40.804121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804125 | orchestrator | Thursday 19 March 2026 02:26:37 +0000 (0:00:00.610) 0:00:52.638 ******** 2026-03-19 02:26:40.804129 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804140 | orchestrator | 2026-03-19 02:26:40.804144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804148 | orchestrator | Thursday 19 March 2026 02:26:37 +0000 (0:00:00.214) 0:00:52.853 ******** 2026-03-19 02:26:40.804152 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804156 | orchestrator | 2026-03-19 02:26:40.804160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804164 | orchestrator | Thursday 19 March 2026 02:26:37 +0000 (0:00:00.213) 0:00:53.066 ******** 2026-03-19 02:26:40.804169 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804173 | orchestrator | 2026-03-19 02:26:40.804180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804187 | orchestrator | Thursday 19 March 2026 02:26:37 +0000 (0:00:00.210) 
0:00:53.276 ******** 2026-03-19 02:26:40.804194 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:40.804201 | orchestrator | 2026-03-19 02:26:40.804207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804214 | orchestrator | Thursday 19 March 2026 02:26:37 +0000 (0:00:00.207) 0:00:53.484 ******** 2026-03-19 02:26:40.804222 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77) 2026-03-19 02:26:40.804230 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77) 2026-03-19 02:26:40.804238 | orchestrator | 2026-03-19 02:26:40.804243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804247 | orchestrator | Thursday 19 March 2026 02:26:38 +0000 (0:00:00.456) 0:00:53.940 ******** 2026-03-19 02:26:40.804286 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97) 2026-03-19 02:26:40.804295 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97) 2026-03-19 02:26:40.804305 | orchestrator | 2026-03-19 02:26:40.804310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804314 | orchestrator | Thursday 19 March 2026 02:26:38 +0000 (0:00:00.447) 0:00:54.388 ******** 2026-03-19 02:26:40.804319 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff) 2026-03-19 02:26:40.804324 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff) 2026-03-19 02:26:40.804328 | orchestrator | 2026-03-19 02:26:40.804332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804337 | orchestrator | Thursday 19 
March 2026 02:26:39 +0000 (0:00:00.452) 0:00:54.840 ******** 2026-03-19 02:26:40.804341 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906) 2026-03-19 02:26:40.804346 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906) 2026-03-19 02:26:40.804350 | orchestrator | 2026-03-19 02:26:40.804355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-19 02:26:40.804359 | orchestrator | Thursday 19 March 2026 02:26:39 +0000 (0:00:00.442) 0:00:55.282 ******** 2026-03-19 02:26:40.804364 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-19 02:26:40.804369 | orchestrator | 2026-03-19 02:26:40.804373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:40.804378 | orchestrator | Thursday 19 March 2026 02:26:40 +0000 (0:00:00.347) 0:00:55.630 ******** 2026-03-19 02:26:40.804382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-19 02:26:40.804387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-19 02:26:40.804391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-19 02:26:40.804396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-19 02:26:40.804400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-19 02:26:40.804404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-19 02:26:40.804409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-19 02:26:40.804413 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-19 02:26:40.804418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-19 02:26:40.804423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-19 02:26:40.804427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-19 02:26:40.804435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-19 02:26:50.083172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-19 02:26:50.083312 | orchestrator | 2026-03-19 02:26:50.083332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083344 | orchestrator | Thursday 19 March 2026 02:26:40 +0000 (0:00:00.658) 0:00:56.289 ******** 2026-03-19 02:26:50.083356 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083368 | orchestrator | 2026-03-19 02:26:50.083378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083406 | orchestrator | Thursday 19 March 2026 02:26:41 +0000 (0:00:00.219) 0:00:56.509 ******** 2026-03-19 02:26:50.083418 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083498 | orchestrator | 2026-03-19 02:26:50.083516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083531 | orchestrator | Thursday 19 March 2026 02:26:41 +0000 (0:00:00.224) 0:00:56.733 ******** 2026-03-19 02:26:50.083547 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083563 | orchestrator | 2026-03-19 02:26:50.083579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083596 | 
orchestrator | Thursday 19 March 2026 02:26:41 +0000 (0:00:00.209) 0:00:56.943 ******** 2026-03-19 02:26:50.083613 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083629 | orchestrator | 2026-03-19 02:26:50.083675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083695 | orchestrator | Thursday 19 March 2026 02:26:41 +0000 (0:00:00.214) 0:00:57.157 ******** 2026-03-19 02:26:50.083715 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083734 | orchestrator | 2026-03-19 02:26:50.083752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083771 | orchestrator | Thursday 19 March 2026 02:26:41 +0000 (0:00:00.215) 0:00:57.372 ******** 2026-03-19 02:26:50.083791 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083811 | orchestrator | 2026-03-19 02:26:50.083830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083848 | orchestrator | Thursday 19 March 2026 02:26:42 +0000 (0:00:00.220) 0:00:57.592 ******** 2026-03-19 02:26:50.083867 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083882 | orchestrator | 2026-03-19 02:26:50.083896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083909 | orchestrator | Thursday 19 March 2026 02:26:42 +0000 (0:00:00.202) 0:00:57.796 ******** 2026-03-19 02:26:50.083922 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.083934 | orchestrator | 2026-03-19 02:26:50.083947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.083959 | orchestrator | Thursday 19 March 2026 02:26:42 +0000 (0:00:00.212) 0:00:58.008 ******** 2026-03-19 02:26:50.083972 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-19 02:26:50.083985 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-19 02:26:50.083997 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-19 02:26:50.084009 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-19 02:26:50.084022 | orchestrator | 2026-03-19 02:26:50.084033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.084044 | orchestrator | Thursday 19 March 2026 02:26:43 +0000 (0:00:00.928) 0:00:58.936 ******** 2026-03-19 02:26:50.084055 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084065 | orchestrator | 2026-03-19 02:26:50.084076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.084087 | orchestrator | Thursday 19 March 2026 02:26:44 +0000 (0:00:00.697) 0:00:59.634 ******** 2026-03-19 02:26:50.084098 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084108 | orchestrator | 2026-03-19 02:26:50.084119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.084135 | orchestrator | Thursday 19 March 2026 02:26:44 +0000 (0:00:00.213) 0:00:59.848 ******** 2026-03-19 02:26:50.084154 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084171 | orchestrator | 2026-03-19 02:26:50.084188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-19 02:26:50.084205 | orchestrator | Thursday 19 March 2026 02:26:44 +0000 (0:00:00.225) 0:01:00.073 ******** 2026-03-19 02:26:50.084221 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084240 | orchestrator | 2026-03-19 02:26:50.084259 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-19 02:26:50.084278 | orchestrator | Thursday 19 March 2026 02:26:44 +0000 (0:00:00.209) 0:01:00.283 ******** 2026-03-19 02:26:50.084295 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
02:26:50.084314 | orchestrator | 2026-03-19 02:26:50.084339 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-19 02:26:50.084350 | orchestrator | Thursday 19 March 2026 02:26:44 +0000 (0:00:00.141) 0:01:00.425 ******** 2026-03-19 02:26:50.084362 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ab7a01d4-aa20-5ffe-8eee-b634151ce758'}}) 2026-03-19 02:26:50.084374 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb497169-2d92-5217-a604-0fdb844d53ba'}}) 2026-03-19 02:26:50.084384 | orchestrator | 2026-03-19 02:26:50.084395 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-19 02:26:50.084406 | orchestrator | Thursday 19 March 2026 02:26:45 +0000 (0:00:00.205) 0:01:00.630 ******** 2026-03-19 02:26:50.084419 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}) 2026-03-19 02:26:50.084431 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}) 2026-03-19 02:26:50.084442 | orchestrator | 2026-03-19 02:26:50.084453 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-19 02:26:50.084488 | orchestrator | Thursday 19 March 2026 02:26:46 +0000 (0:00:01.857) 0:01:02.487 ******** 2026-03-19 02:26:50.084500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:50.084513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:50.084523 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 02:26:50.084534 | orchestrator | 2026-03-19 02:26:50.084554 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-19 02:26:50.084565 | orchestrator | Thursday 19 March 2026 02:26:47 +0000 (0:00:00.176) 0:01:02.664 ******** 2026-03-19 02:26:50.084576 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}) 2026-03-19 02:26:50.084587 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}) 2026-03-19 02:26:50.084598 | orchestrator | 2026-03-19 02:26:50.084609 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-19 02:26:50.084619 | orchestrator | Thursday 19 March 2026 02:26:48 +0000 (0:00:01.378) 0:01:04.043 ******** 2026-03-19 02:26:50.084630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:50.084668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:50.084688 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084706 | orchestrator | 2026-03-19 02:26:50.084726 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-19 02:26:50.084747 | orchestrator | Thursday 19 March 2026 02:26:48 +0000 (0:00:00.148) 0:01:04.192 ******** 2026-03-19 02:26:50.084765 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084781 | orchestrator | 2026-03-19 02:26:50.084792 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-19 02:26:50.084803 | 
orchestrator | Thursday 19 March 2026 02:26:48 +0000 (0:00:00.121) 0:01:04.314 ******** 2026-03-19 02:26:50.084814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:50.084825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:50.084846 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084857 | orchestrator | 2026-03-19 02:26:50.084868 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-19 02:26:50.084878 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.324) 0:01:04.638 ******** 2026-03-19 02:26:50.084889 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084900 | orchestrator | 2026-03-19 02:26:50.084911 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-19 02:26:50.084921 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.152) 0:01:04.790 ******** 2026-03-19 02:26:50.084932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:50.084948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:50.084966 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.084984 | orchestrator | 2026-03-19 02:26:50.085002 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-19 02:26:50.085020 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.159) 0:01:04.950 ******** 2026-03-19 02:26:50.085038 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 02:26:50.085057 | orchestrator | 2026-03-19 02:26:50.085094 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-19 02:26:50.085135 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.138) 0:01:05.089 ******** 2026-03-19 02:26:50.085157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:50.085176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:50.085193 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:50.085211 | orchestrator | 2026-03-19 02:26:50.085229 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-19 02:26:50.085246 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.168) 0:01:05.257 ******** 2026-03-19 02:26:50.085265 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:26:50.085283 | orchestrator | 2026-03-19 02:26:50.085302 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-19 02:26:50.085320 | orchestrator | Thursday 19 March 2026 02:26:49 +0000 (0:00:00.146) 0:01:05.403 ******** 2026-03-19 02:26:50.085355 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 02:26:56.858969 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 02:26:56.859109 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:26:56.859135 | orchestrator | 2026-03-19 02:26:56.859155 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] ***************
2026-03-19 02:26:56.859176 | orchestrator | Thursday 19 March 2026 02:26:50 +0000 (0:00:00.171) 0:01:05.575 ********
2026-03-19 02:26:56.859219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:26:56.859236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:26:56.859253 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859272 | orchestrator |
2026-03-19 02:26:56.859290 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-19 02:26:56.859307 | orchestrator | Thursday 19 March 2026 02:26:50 +0000 (0:00:00.166) 0:01:05.741 ********
2026-03-19 02:26:56.859357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:26:56.859374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:26:56.859390 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859408 | orchestrator |
2026-03-19 02:26:56.859425 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-19 02:26:56.859440 | orchestrator | Thursday 19 March 2026 02:26:50 +0000 (0:00:00.167) 0:01:05.909 ********
2026-03-19 02:26:56.859450 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859460 | orchestrator |
2026-03-19 02:26:56.859469 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-19 02:26:56.859479 | orchestrator | Thursday 19 March 2026 02:26:50 +0000 (0:00:00.160) 0:01:06.069 ********
2026-03-19 02:26:56.859488 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859499 | orchestrator |
2026-03-19 02:26:56.859508 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-19 02:26:56.859533 | orchestrator | Thursday 19 March 2026 02:26:50 +0000 (0:00:00.145) 0:01:06.214 ********
2026-03-19 02:26:56.859544 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859563 | orchestrator |
2026-03-19 02:26:56.859573 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-19 02:26:56.859583 | orchestrator | Thursday 19 March 2026 02:26:51 +0000 (0:00:00.345) 0:01:06.560 ********
2026-03-19 02:26:56.859592 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:26:56.859602 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-19 02:26:56.859612 | orchestrator | }
2026-03-19 02:26:56.859622 | orchestrator |
2026-03-19 02:26:56.859631 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-19 02:26:56.859667 | orchestrator | Thursday 19 March 2026 02:26:51 +0000 (0:00:00.150) 0:01:06.711 ********
2026-03-19 02:26:56.859683 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:26:56.859700 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-19 02:26:56.859717 | orchestrator | }
2026-03-19 02:26:56.859733 | orchestrator |
2026-03-19 02:26:56.859747 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-19 02:26:56.859756 | orchestrator | Thursday 19 March 2026 02:26:51 +0000 (0:00:00.150) 0:01:06.861 ********
2026-03-19 02:26:56.859766 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:26:56.859775 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-19 02:26:56.859785 | orchestrator | }
2026-03-19 02:26:56.859795 | orchestrator |
2026-03-19 02:26:56.859804 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-19 02:26:56.859814 | orchestrator | Thursday 19 March 2026 02:26:51 +0000 (0:00:00.145) 0:01:07.007 ********
2026-03-19 02:26:56.859824 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:26:56.859833 | orchestrator |
2026-03-19 02:26:56.859843 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-19 02:26:56.859853 | orchestrator | Thursday 19 March 2026 02:26:52 +0000 (0:00:00.539) 0:01:07.547 ********
2026-03-19 02:26:56.859862 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:26:56.859872 | orchestrator |
2026-03-19 02:26:56.859881 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-19 02:26:56.859891 | orchestrator | Thursday 19 March 2026 02:26:52 +0000 (0:00:00.545) 0:01:08.092 ********
2026-03-19 02:26:56.859901 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:26:56.859910 | orchestrator |
2026-03-19 02:26:56.859920 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-19 02:26:56.859930 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.546) 0:01:08.639 ********
2026-03-19 02:26:56.859939 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:26:56.859949 | orchestrator |
2026-03-19 02:26:56.859958 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-19 02:26:56.859977 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.200) 0:01:08.839 ********
2026-03-19 02:26:56.859987 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.859996 | orchestrator |
2026-03-19 02:26:56.860006 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-19 02:26:56.860016 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.121) 0:01:08.961 ********
2026-03-19 02:26:56.860026 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860035 | orchestrator |
2026-03-19 02:26:56.860045 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-19 02:26:56.860055 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.148) 0:01:09.109 ********
2026-03-19 02:26:56.860064 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:26:56.860074 | orchestrator |     "vgs_report": {
2026-03-19 02:26:56.860085 | orchestrator |         "vg": []
2026-03-19 02:26:56.860115 | orchestrator |     }
2026-03-19 02:26:56.860125 | orchestrator | }
2026-03-19 02:26:56.860135 | orchestrator |
2026-03-19 02:26:56.860145 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-19 02:26:56.860160 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.167) 0:01:09.277 ********
2026-03-19 02:26:56.860184 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860203 | orchestrator |
2026-03-19 02:26:56.860219 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-19 02:26:56.860235 | orchestrator | Thursday 19 March 2026 02:26:53 +0000 (0:00:00.156) 0:01:09.433 ********
2026-03-19 02:26:56.860258 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860273 | orchestrator |
2026-03-19 02:26:56.860289 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-19 02:26:56.860305 | orchestrator | Thursday 19 March 2026 02:26:54 +0000 (0:00:00.332) 0:01:09.766 ********
2026-03-19 02:26:56.860322 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860339 | orchestrator |
2026-03-19 02:26:56.860355 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-19 02:26:56.860371 | orchestrator | Thursday 19 March 2026 02:26:54 +0000 (0:00:00.143) 0:01:09.910 ********
2026-03-19 02:26:56.860384 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860393 | orchestrator |
2026-03-19 02:26:56.860403 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-19 02:26:56.860412 | orchestrator | Thursday 19 March 2026 02:26:54 +0000 (0:00:00.153) 0:01:10.063 ********
2026-03-19 02:26:56.860422 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860431 | orchestrator |
2026-03-19 02:26:56.860441 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-19 02:26:56.860450 | orchestrator | Thursday 19 March 2026 02:26:54 +0000 (0:00:00.143) 0:01:10.206 ********
2026-03-19 02:26:56.860460 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860469 | orchestrator |
2026-03-19 02:26:56.860479 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-19 02:26:56.860488 | orchestrator | Thursday 19 March 2026 02:26:54 +0000 (0:00:00.165) 0:01:10.372 ********
2026-03-19 02:26:56.860497 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860507 | orchestrator |
2026-03-19 02:26:56.860516 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-19 02:26:56.860526 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.172) 0:01:10.544 ********
2026-03-19 02:26:56.860535 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860545 | orchestrator |
2026-03-19 02:26:56.860554 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-19 02:26:56.860564 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.167) 0:01:10.712 ********
2026-03-19 02:26:56.860579 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860594 | orchestrator |
2026-03-19 02:26:56.860607 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-19 02:26:56.860620 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.140) 0:01:10.852 ********
2026-03-19 02:26:56.860726 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860741 | orchestrator |
2026-03-19 02:26:56.860755 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-19 02:26:56.860771 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.145) 0:01:10.998 ********
2026-03-19 02:26:56.860785 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860800 | orchestrator |
2026-03-19 02:26:56.860813 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-19 02:26:56.860826 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.150) 0:01:11.149 ********
2026-03-19 02:26:56.860840 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860855 | orchestrator |
2026-03-19 02:26:56.860870 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-19 02:26:56.860884 | orchestrator | Thursday 19 March 2026 02:26:55 +0000 (0:00:00.167) 0:01:11.316 ********
2026-03-19 02:26:56.860897 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860912 | orchestrator |
2026-03-19 02:26:56.860927 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-19 02:26:56.860943 | orchestrator | Thursday 19 March 2026 02:26:56 +0000 (0:00:00.381) 0:01:11.698 ********
2026-03-19 02:26:56.860959 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.860976 | orchestrator |
2026-03-19 02:26:56.860992 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-19 02:26:56.861008 | orchestrator | Thursday 19 March 2026 02:26:56 +0000 (0:00:00.150) 0:01:11.848 ********
2026-03-19 02:26:56.861025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:26:56.861036 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:26:56.861045 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.861055 | orchestrator |
2026-03-19 02:26:56.861065 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-19 02:26:56.861074 | orchestrator | Thursday 19 March 2026 02:26:56 +0000 (0:00:00.165) 0:01:12.014 ********
2026-03-19 02:26:56.861084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:26:56.861094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:26:56.861105 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:26:56.861121 | orchestrator |
2026-03-19 02:26:56.861140 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-19 02:26:56.861164 | orchestrator | Thursday 19 March 2026 02:26:56 +0000 (0:00:00.169) 0:01:12.182 ********
2026-03-19 02:26:56.861195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010204 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010224 | orchestrator |
2026-03-19 02:27:00.010260 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-19 02:27:00.010272 | orchestrator | Thursday 19 March 2026 02:26:56 +0000 (0:00:00.169) 0:01:12.351 ********
2026-03-19 02:27:00.010282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010334 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010346 | orchestrator |
2026-03-19 02:27:00.010357 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-19 02:27:00.010368 | orchestrator | Thursday 19 March 2026 02:26:57 +0000 (0:00:00.149) 0:01:12.501 ********
2026-03-19 02:27:00.010378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010398 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010408 | orchestrator |
2026-03-19 02:27:00.010419 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-19 02:27:00.010428 | orchestrator | Thursday 19 March 2026 02:26:57 +0000 (0:00:00.171) 0:01:12.672 ********
2026-03-19 02:27:00.010434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010447 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010454 | orchestrator |
2026-03-19 02:27:00.010460 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-19 02:27:00.010466 | orchestrator | Thursday 19 March 2026 02:26:57 +0000 (0:00:00.157) 0:01:12.830 ********
2026-03-19 02:27:00.010472 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010484 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010491 | orchestrator |
2026-03-19 02:27:00.010497 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-19 02:27:00.010503 | orchestrator | Thursday 19 March 2026 02:26:57 +0000 (0:00:00.164) 0:01:12.995 ********
2026-03-19 02:27:00.010509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010521 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010528 | orchestrator |
2026-03-19 02:27:00.010534 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-19 02:27:00.010540 | orchestrator | Thursday 19 March 2026 02:26:57 +0000 (0:00:00.146) 0:01:13.142 ********
2026-03-19 02:27:00.010546 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:00.010553 | orchestrator |
2026-03-19 02:27:00.010559 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-19 02:27:00.010565 | orchestrator | Thursday 19 March 2026 02:26:58 +0000 (0:00:00.561) 0:01:13.704 ********
2026-03-19 02:27:00.010571 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:00.010578 | orchestrator |
2026-03-19 02:27:00.010584 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-19 02:27:00.010591 | orchestrator | Thursday 19 March 2026 02:26:58 +0000 (0:00:00.790) 0:01:14.494 ********
2026-03-19 02:27:00.010598 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:00.010606 | orchestrator |
2026-03-19 02:27:00.010613 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-19 02:27:00.010621 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.158) 0:01:14.652 ********
2026-03-19 02:27:00.010634 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'vg_name': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010685 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'vg_name': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010693 | orchestrator |
2026-03-19 02:27:00.010700 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-19 02:27:00.010708 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.165) 0:01:14.818 ********
2026-03-19 02:27:00.010730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010752 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010759 | orchestrator |
2026-03-19 02:27:00.010767 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-19 02:27:00.010778 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.160) 0:01:14.979 ********
2026-03-19 02:27:00.010788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010809 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010820 | orchestrator |
2026-03-19 02:27:00.010830 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-19 02:27:00.010840 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.183) 0:01:15.162 ********
2026-03-19 02:27:00.010850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 02:27:00.010860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 02:27:00.010870 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:00.010880 | orchestrator |
2026-03-19 02:27:00.010891 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-19 02:27:00.010901 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.156) 0:01:15.319 ********
2026-03-19 02:27:00.010911 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 02:27:00.010922 | orchestrator |     "lvm_report": {
2026-03-19 02:27:00.010934 | orchestrator |         "lv": [
2026-03-19 02:27:00.010945 | orchestrator |             {
2026-03-19 02:27:00.010956 | orchestrator |                 "lv_name": "osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758",
2026-03-19 02:27:00.010967 | orchestrator |                 "vg_name": "ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758"
2026-03-19 02:27:00.010978 | orchestrator |             },
2026-03-19 02:27:00.010986 | orchestrator |             {
2026-03-19 02:27:00.010992 | orchestrator |                 "lv_name": "osd-block-eb497169-2d92-5217-a604-0fdb844d53ba",
2026-03-19 02:27:00.010999 | orchestrator |                 "vg_name": "ceph-eb497169-2d92-5217-a604-0fdb844d53ba"
2026-03-19 02:27:00.011005 | orchestrator |             }
2026-03-19 02:27:00.011011 | orchestrator |         ],
2026-03-19 02:27:00.011017 | orchestrator |         "pv": [
2026-03-19 02:27:00.011023 | orchestrator |             {
2026-03-19 02:27:00.011029 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-19 02:27:00.011036 | orchestrator |                 "vg_name": "ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758"
2026-03-19 02:27:00.011042 | orchestrator |             },
2026-03-19 02:27:00.011048 | orchestrator |             {
2026-03-19 02:27:00.011054 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-19 02:27:00.011071 | orchestrator |                 "vg_name": "ceph-eb497169-2d92-5217-a604-0fdb844d53ba"
2026-03-19 02:27:00.011077 | orchestrator |             }
2026-03-19 02:27:00.011084 | orchestrator |         ]
2026-03-19 02:27:00.011090 | orchestrator |     }
2026-03-19 02:27:00.011096 | orchestrator | }
2026-03-19 02:27:00.011103 | orchestrator |
2026-03-19 02:27:00.011109 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:27:00.011115 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-19 02:27:00.011122 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-19 02:27:00.011128 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-19 02:27:00.011134 | orchestrator |
2026-03-19 02:27:00.011141 | orchestrator |
2026-03-19 02:27:00.011147 | orchestrator |
2026-03-19 02:27:00.011153 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:27:00.011159 | orchestrator | Thursday 19 March 2026 02:26:59 +0000 (0:00:00.159) 0:01:15.478 ********
2026-03-19 02:27:00.011165 | orchestrator | ===============================================================================
2026-03-19 02:27:00.011173 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s
2026-03-19 02:27:00.011184 | orchestrator | Create block LVs -------------------------------------------------------- 4.28s
2026-03-19 02:27:00.011194 | orchestrator | Add known partitions to the list of available block devices ------------- 1.97s
2026-03-19 02:27:00.011204 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.85s
2026-03-19 02:27:00.011214 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s
2026-03-19 02:27:00.011224 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.62s
2026-03-19 02:27:00.011235 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s
2026-03-19 02:27:00.011245 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s
2026-03-19 02:27:00.011263 | orchestrator | Add known links to the list of available block devices ------------------ 1.39s
2026-03-19 02:27:00.392567 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s
2026-03-19 02:27:00.392687 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2026-03-19 02:27:00.392697 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.82s
2026-03-19 02:27:00.392722 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2026-03-19 02:27:00.392729 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.81s
2026-03-19 02:27:00.392735 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2026-03-19 02:27:00.392741 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s
2026-03-19 02:27:00.392747 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2026-03-19 02:27:00.392753 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-03-19 02:27:00.392759 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.70s
2026-03-19 02:27:00.392765 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-03-19 02:27:12.755325 | orchestrator | 2026-03-19 02:27:12 | INFO  | Task 38160dcc-e8aa-4817-867d-bc34bcb98ac2 (facts) was prepared for execution.
2026-03-19 02:27:12.755409 | orchestrator | 2026-03-19 02:27:12 | INFO  | It takes a moment until task 38160dcc-e8aa-4817-867d-bc34bcb98ac2 (facts) has been started and output is visible here.
2026-03-19 02:27:26.001532 | orchestrator |
2026-03-19 02:27:26.001736 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-19 02:27:26.001787 | orchestrator |
2026-03-19 02:27:26.001798 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-19 02:27:26.001807 | orchestrator | Thursday 19 March 2026 02:27:16 +0000 (0:00:00.275) 0:00:00.276 ********
2026-03-19 02:27:26.001816 | orchestrator | ok: [testbed-manager]
2026-03-19 02:27:26.001826 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:26.001835 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:26.001843 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:26.001852 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:26.001861 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:26.001869 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:26.001878 | orchestrator |
2026-03-19 02:27:26.001887 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-19 02:27:26.001896 | orchestrator | Thursday 19 March 2026 02:27:18 +0000 (0:00:01.184) 0:00:01.461 ********
2026-03-19 02:27:26.001904 | orchestrator | skipping: [testbed-manager]
2026-03-19 02:27:26.001914 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:27:26.001923 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:27:26.001931 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:27:26.001940 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:27:26.001949 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:27:26.001957 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:26.001966 | orchestrator |
2026-03-19 02:27:26.001975 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-19 02:27:26.001984 | orchestrator |
2026-03-19 02:27:26.001992 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-19 02:27:26.002001 | orchestrator | Thursday 19 March 2026 02:27:19 +0000 (0:00:01.299) 0:00:02.760 ********
2026-03-19 02:27:26.002010 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:26.002052 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:26.002063 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:26.002072 | orchestrator | ok: [testbed-manager]
2026-03-19 02:27:26.002080 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:26.002090 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:26.002105 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:26.002118 | orchestrator |
2026-03-19 02:27:26.002127 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-19 02:27:26.002136 | orchestrator |
2026-03-19 02:27:26.002144 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-19 02:27:26.002153 | orchestrator | Thursday 19 March 2026 02:27:25 +0000 (0:00:05.607) 0:00:08.367 ********
2026-03-19 02:27:26.002162 | orchestrator | skipping: [testbed-manager]
2026-03-19 02:27:26.002170 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:27:26.002179 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:27:26.002188 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:27:26.002196 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:27:26.002205 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:27:26.002213 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:26.002222 | orchestrator |
2026-03-19 02:27:26.002230 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:27:26.002239 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002250 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002259 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002268 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002277 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002293 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002307 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 02:27:26.002321 | orchestrator |
2026-03-19 02:27:26.002335 | orchestrator |
2026-03-19 02:27:26.002348 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:27:26.002381 | orchestrator | Thursday 19 March 2026 02:27:25 +0000 (0:00:00.538) 0:00:08.906 ********
2026-03-19 02:27:26.002396 | orchestrator | ===============================================================================
2026-03-19 02:27:26.002412 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.61s
2026-03-19 02:27:26.002430 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s
2026-03-19 02:27:26.002445 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s
2026-03-19 02:27:26.002458 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-03-19 02:27:28.298505 | orchestrator | 2026-03-19 02:27:28 | INFO  | Task 34c8025d-389e-427e-991e-6f965ce26b3a (ceph) was prepared for execution.
2026-03-19 02:27:28.298596 | orchestrator | 2026-03-19 02:27:28 | INFO  | It takes a moment until task 34c8025d-389e-427e-991e-6f965ce26b3a (ceph) has been started and output is visible here.
2026-03-19 02:27:46.085971 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-19 02:27:46.086163 | orchestrator | 2.16.14
2026-03-19 02:27:46.086188 | orchestrator |
2026-03-19 02:27:46.086203 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-19 02:27:46.086218 | orchestrator |
2026-03-19 02:27:46.086232 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 02:27:46.086244 | orchestrator | Thursday 19 March 2026 02:27:33 +0000 (0:00:00.795) 0:00:00.795 ********
2026-03-19 02:27:46.086253 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:27:46.086262 | orchestrator |
2026-03-19 02:27:46.086270 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 02:27:46.086278 | orchestrator | Thursday 19 March 2026 02:27:34 +0000 (0:00:01.172) 0:00:01.967 ********
2026-03-19 02:27:46.086287 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086295 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086303 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086311 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086318 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086326 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086335 | orchestrator |
2026-03-19 02:27:46.086343 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-19 02:27:46.086351 | orchestrator | Thursday 19 March 2026 02:27:35 +0000 (0:00:01.290) 0:00:03.257 ********
2026-03-19 02:27:46.086359 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086367 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086375 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086382 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086390 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086398 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086406 | orchestrator |
2026-03-19 02:27:46.086413 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 02:27:46.086421 | orchestrator | Thursday 19 March 2026 02:27:36 +0000 (0:00:00.748) 0:00:04.006 ********
2026-03-19 02:27:46.086429 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086437 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086456 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086464 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086498 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086508 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086517 | orchestrator |
2026-03-19 02:27:46.086526 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 02:27:46.086535 | orchestrator | Thursday 19 March 2026 02:27:37 +0000 (0:00:00.919) 0:00:04.925 ********
2026-03-19 02:27:46.086544 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086553 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086562 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086571 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086579 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086588 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086597 | orchestrator |
2026-03-19 02:27:46.086606 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-19 02:27:46.086633 | orchestrator | Thursday 19 March 2026 02:27:38 +0000 (0:00:00.791) 0:00:05.717 ********
2026-03-19 02:27:46.086643 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086652 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086661 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086670 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086679 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086688 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086696 | orchestrator |
2026-03-19 02:27:46.086705 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-19 02:27:46.086714 | orchestrator | Thursday 19 March 2026 02:27:38 +0000 (0:00:00.609) 0:00:06.327 ********
2026-03-19 02:27:46.086724 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086733 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:27:46.086742 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:27:46.086751 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:27:46.086760 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:27:46.086769 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:27:46.086777 | orchestrator |
2026-03-19 02:27:46.086788 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-19 02:27:46.086796 | orchestrator | Thursday 19 March 2026 02:27:39 +0000 (0:00:00.784) 0:00:07.111 ********
2026-03-19 02:27:46.086804 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:27:46.086813 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:27:46.086821 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:27:46.086829 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:27:46.086836 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:27:46.086844 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:27:46.086852 | orchestrator |
2026-03-19 02:27:46.086860 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-19 02:27:46.086867 | orchestrator | Thursday 19 March 2026 02:27:40 +0000 (0:00:00.561) 0:00:07.672 ********
2026-03-19 02:27:46.086875 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:27:46.086883 | orchestrator |
ok: [testbed-node-4] 2026-03-19 02:27:46.086890 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:27:46.086898 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:27:46.086906 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:27:46.086928 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:27:46.086936 | orchestrator | 2026-03-19 02:27:46.086944 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 02:27:46.086952 | orchestrator | Thursday 19 March 2026 02:27:40 +0000 (0:00:00.763) 0:00:08.436 ******** 2026-03-19 02:27:46.086960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:27:46.086968 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:27:46.086975 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:27:46.086983 | orchestrator | 2026-03-19 02:27:46.086991 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 02:27:46.086999 | orchestrator | Thursday 19 March 2026 02:27:41 +0000 (0:00:00.634) 0:00:09.070 ******** 2026-03-19 02:27:46.087013 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:27:46.087021 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:27:46.087028 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:27:46.087057 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:27:46.087065 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:27:46.087073 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:27:46.087081 | orchestrator | 2026-03-19 02:27:46.087089 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 02:27:46.087097 | orchestrator | Thursday 19 March 2026 02:27:42 +0000 (0:00:00.708) 0:00:09.778 ******** 2026-03-19 02:27:46.087105 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-03-19 02:27:46.087112 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:27:46.087120 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:27:46.087128 | orchestrator | 2026-03-19 02:27:46.087136 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 02:27:46.087143 | orchestrator | Thursday 19 March 2026 02:27:44 +0000 (0:00:02.401) 0:00:12.180 ******** 2026-03-19 02:27:46.087151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 02:27:46.087160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 02:27:46.087167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 02:27:46.087175 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:46.087183 | orchestrator | 2026-03-19 02:27:46.087191 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 02:27:46.087199 | orchestrator | Thursday 19 March 2026 02:27:45 +0000 (0:00:00.396) 0:00:12.577 ******** 2026-03-19 02:27:46.087209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087236 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:46.087243 | orchestrator | 2026-03-19 02:27:46.087251 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 02:27:46.087259 | orchestrator | Thursday 19 March 2026 02:27:45 +0000 (0:00:00.601) 0:00:13.178 ******** 2026-03-19 02:27:46.087269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:46.087304 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:46.087312 | orchestrator | 2026-03-19 02:27:46.087324 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-19 02:27:46.087332 | orchestrator | Thursday 19 March 2026 02:27:45 +0000 (0:00:00.161) 0:00:13.339 ******** 2026-03-19 02:27:46.087351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 02:27:43.185341', 'end': '2026-03-19 02:27:43.224355', 'delta': '0:00:00.039014', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 02:27:55.570301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 02:27:43.740970', 'end': '2026-03-19 02:27:43.795400', 'delta': '0:00:00.054430', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 02:27:55.570421 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 02:27:44.335082', 'end': '2026-03-19 02:27:44.385197', 'delta': 
'0:00:00.050115', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 02:27:55.570436 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570447 | orchestrator | 2026-03-19 02:27:55.570457 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 02:27:55.570466 | orchestrator | Thursday 19 March 2026 02:27:46 +0000 (0:00:00.199) 0:00:13.539 ******** 2026-03-19 02:27:55.570474 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:27:55.570483 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:27:55.570491 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:27:55.570499 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:27:55.570507 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:27:55.570515 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:27:55.570522 | orchestrator | 2026-03-19 02:27:55.570531 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 02:27:55.570539 | orchestrator | Thursday 19 March 2026 02:27:46 +0000 (0:00:00.721) 0:00:14.261 ******** 2026-03-19 02:27:55.570547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 02:27:55.570555 | orchestrator | 2026-03-19 02:27:55.570563 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 02:27:55.570575 | orchestrator | Thursday 19 March 2026 02:27:47 +0000 (0:00:00.885) 0:00:15.146 ******** 2026-03-19 02:27:55.570680 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570691 | 
orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.570699 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.570707 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.570715 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.570723 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.570731 | orchestrator | 2026-03-19 02:27:55.570739 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 02:27:55.570747 | orchestrator | Thursday 19 March 2026 02:27:48 +0000 (0:00:00.854) 0:00:16.001 ******** 2026-03-19 02:27:55.570755 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570763 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.570771 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.570779 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.570787 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.570794 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.570802 | orchestrator | 2026-03-19 02:27:55.570810 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 02:27:55.570819 | orchestrator | Thursday 19 March 2026 02:27:49 +0000 (0:00:01.076) 0:00:17.078 ******** 2026-03-19 02:27:55.570828 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570837 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.570846 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.570854 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.570863 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.570887 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.570896 | orchestrator | 2026-03-19 02:27:55.570905 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 02:27:55.570914 | orchestrator | Thursday 19 March 2026 02:27:50 
+0000 (0:00:00.581) 0:00:17.659 ******** 2026-03-19 02:27:55.570922 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570931 | orchestrator | 2026-03-19 02:27:55.570940 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 02:27:55.570949 | orchestrator | Thursday 19 March 2026 02:27:50 +0000 (0:00:00.116) 0:00:17.776 ******** 2026-03-19 02:27:55.570958 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.570967 | orchestrator | 2026-03-19 02:27:55.570977 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 02:27:55.570992 | orchestrator | Thursday 19 March 2026 02:27:50 +0000 (0:00:00.235) 0:00:18.011 ******** 2026-03-19 02:27:55.571007 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571021 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571035 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571050 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571060 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571070 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571079 | orchestrator | 2026-03-19 02:27:55.571111 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 02:27:55.571125 | orchestrator | Thursday 19 March 2026 02:27:51 +0000 (0:00:00.765) 0:00:18.777 ******** 2026-03-19 02:27:55.571139 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571154 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571167 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571181 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571195 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571209 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571223 | orchestrator | 2026-03-19 02:27:55.571237 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-19 02:27:55.571249 | orchestrator | Thursday 19 March 2026 02:27:51 +0000 (0:00:00.587) 0:00:19.364 ******** 2026-03-19 02:27:55.571257 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571265 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571273 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571294 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571307 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571319 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571332 | orchestrator | 2026-03-19 02:27:55.571346 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 02:27:55.571359 | orchestrator | Thursday 19 March 2026 02:27:52 +0000 (0:00:00.767) 0:00:20.132 ******** 2026-03-19 02:27:55.571373 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571381 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571389 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571397 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571406 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571420 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571434 | orchestrator | 2026-03-19 02:27:55.571447 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 02:27:55.571461 | orchestrator | Thursday 19 March 2026 02:27:53 +0000 (0:00:00.587) 0:00:20.719 ******** 2026-03-19 02:27:55.571470 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571481 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571495 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571508 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571522 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571536 | orchestrator 
| skipping: [testbed-node-2] 2026-03-19 02:27:55.571549 | orchestrator | 2026-03-19 02:27:55.571563 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 02:27:55.571576 | orchestrator | Thursday 19 March 2026 02:27:54 +0000 (0:00:00.826) 0:00:21.546 ******** 2026-03-19 02:27:55.571590 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571604 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571641 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571655 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571667 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571681 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571696 | orchestrator | 2026-03-19 02:27:55.571705 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 02:27:55.571713 | orchestrator | Thursday 19 March 2026 02:27:54 +0000 (0:00:00.562) 0:00:22.108 ******** 2026-03-19 02:27:55.571722 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.571735 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:55.571748 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:55.571761 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:55.571774 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:55.571789 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:27:55.571803 | orchestrator | 2026-03-19 02:27:55.571817 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 02:27:55.571830 | orchestrator | Thursday 19 March 2026 02:27:55 +0000 (0:00:00.790) 0:00:22.899 ******** 2026-03-19 02:27:55.571846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.571870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.571906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.697263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.697292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.697299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.697305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.697319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.697331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.940991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 
'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.941452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.941476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.941502 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:27:55.941525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.941548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.979357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.979482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:55.979833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.979868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:55.979899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.252000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.252141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.252200 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:27:56.252230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.252398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.252407 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:56.252416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.252441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.478418 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:27:56.478433 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:56.478445 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:56.478456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.478548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:27:56.678603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-19 02:27:56.678735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-19 02:27:56.678785 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:27:56.678792 | orchestrator |
2026-03-19 02:27:56.678799 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-19 02:27:56.678805 | orchestrator | Thursday 19 March 2026 02:27:56 +0000 (0:00:01.025) 0:00:23.925 ********
2026-03-19 02:27:56.678813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.678905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-19 02:27:56.981278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981363 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.981453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.984966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985135 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985181 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985238 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:56.985308 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 02:27:57.242146 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:27:57.242160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 02:27:57.242193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242254 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 02:27:57.242272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 02:27:57.242279 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:27:57.242286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-19 02:27:57.242298 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links':
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242313 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.242336 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.346914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 02:27:57.347044 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347059 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347085 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347095 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347111 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347181 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.347214 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485234 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485370 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485393 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:27:57.485398 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485402 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485406 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485410 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485414 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485421 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.485432 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692124 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692240 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 02:27:57.692307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692323 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:27:57.692337 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:27:57.692369 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692382 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692394 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692405 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692441 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692453 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:27:57.692473 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:28:04.683852 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:28:04.684068 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:28:04.684097 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:04.684117 | orchestrator | 2026-03-19 02:28:04.684137 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 02:28:04.684156 | orchestrator | Thursday 19 March 2026 02:27:57 +0000 (0:00:01.223) 0:00:25.149 ******** 2026-03-19 02:28:04.684173 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:04.684190 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:04.684207 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:04.684224 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:04.684241 | 
orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:04.684258 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:04.684275 | orchestrator | 2026-03-19 02:28:04.684293 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 02:28:04.684310 | orchestrator | Thursday 19 March 2026 02:27:58 +0000 (0:00:00.922) 0:00:26.072 ******** 2026-03-19 02:28:04.684327 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:04.684344 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:04.684361 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:04.684378 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:04.684394 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:04.684410 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:04.684427 | orchestrator | 2026-03-19 02:28:04.684444 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 02:28:04.684461 | orchestrator | Thursday 19 March 2026 02:27:59 +0000 (0:00:00.776) 0:00:26.848 ******** 2026-03-19 02:28:04.684478 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:04.684495 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:04.684511 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:04.684551 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:04.684569 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:04.684586 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:04.684602 | orchestrator | 2026-03-19 02:28:04.684650 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 02:28:04.684669 | orchestrator | Thursday 19 March 2026 02:27:59 +0000 (0:00:00.562) 0:00:27.410 ******** 2026-03-19 02:28:04.684687 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:04.684704 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:04.684720 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
02:28:04.684737 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:04.684753 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:04.684770 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:04.684787 | orchestrator | 2026-03-19 02:28:04.684803 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 02:28:04.684819 | orchestrator | Thursday 19 March 2026 02:28:00 +0000 (0:00:00.782) 0:00:28.193 ******** 2026-03-19 02:28:04.684836 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:04.684854 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:04.684955 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:04.684990 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:04.685007 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:04.685024 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:04.685040 | orchestrator | 2026-03-19 02:28:04.685057 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 02:28:04.685075 | orchestrator | Thursday 19 March 2026 02:28:01 +0000 (0:00:00.618) 0:00:28.812 ******** 2026-03-19 02:28:04.685093 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:04.685110 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:04.685127 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:04.685145 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:04.685163 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:04.685180 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:04.685199 | orchestrator | 2026-03-19 02:28:04.685219 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 02:28:04.685237 | orchestrator | Thursday 19 March 2026 02:28:02 +0000 (0:00:00.806) 0:00:29.618 ******** 2026-03-19 02:28:04.685254 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-03-19 02:28:04.685266 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 02:28:04.685277 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 02:28:04.685287 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 02:28:04.685298 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 02:28:04.685309 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 02:28:04.685319 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 02:28:04.685330 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 02:28:04.685341 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-19 02:28:04.685352 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 02:28:04.685363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 02:28:04.685373 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 02:28:04.685384 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-19 02:28:04.685395 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 02:28:04.685405 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 02:28:04.685416 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-19 02:28:04.685427 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-19 02:28:04.685447 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 02:28:04.685458 | orchestrator | 2026-03-19 02:28:04.685469 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 02:28:04.685480 | orchestrator | Thursday 19 March 2026 02:28:03 +0000 (0:00:01.614) 0:00:31.233 ******** 2026-03-19 02:28:04.685491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 02:28:04.685503 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-03-19 02:28:04.685520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 02:28:04.685537 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:04.685563 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 02:28:04.685584 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 02:28:04.685603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 02:28:04.685663 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:04.685682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 02:28:04.685700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 02:28:04.685716 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 02:28:04.685732 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:04.685749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 02:28:04.685767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 02:28:04.685804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 02:28:04.685821 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:04.685839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 02:28:04.685857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 02:28:04.685875 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 02:28:04.685892 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:04.685911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 02:28:04.685930 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 02:28:04.685947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 02:28:04.685958 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 02:28:04.685969 | orchestrator | 2026-03-19 02:28:04.685980 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 02:28:04.686007 | orchestrator | Thursday 19 March 2026 02:28:04 +0000 (0:00:00.899) 0:00:32.133 ******** 2026-03-19 02:28:22.286530 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:22.286726 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.286742 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.286750 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:28:22.286758 | orchestrator | 2026-03-19 02:28:22.286765 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 02:28:22.286774 | orchestrator | Thursday 19 March 2026 02:28:05 +0000 (0:00:01.014) 0:00:33.147 ******** 2026-03-19 02:28:22.286781 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.286789 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.286795 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.286801 | orchestrator | 2026-03-19 02:28:22.286807 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 02:28:22.286813 | orchestrator | Thursday 19 March 2026 02:28:06 +0000 (0:00:00.345) 0:00:33.493 ******** 2026-03-19 02:28:22.286820 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.286826 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.286832 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.286838 | orchestrator | 2026-03-19 02:28:22.286844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 02:28:22.286850 | orchestrator | Thursday 19 March 2026 02:28:06 +0000 
(0:00:00.354) 0:00:33.847 ******** 2026-03-19 02:28:22.286857 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.286863 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.286869 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.286875 | orchestrator | 2026-03-19 02:28:22.286881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 02:28:22.286887 | orchestrator | Thursday 19 March 2026 02:28:06 +0000 (0:00:00.318) 0:00:34.166 ******** 2026-03-19 02:28:22.286894 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.286901 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:22.286907 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:22.286913 | orchestrator | 2026-03-19 02:28:22.286919 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 02:28:22.286925 | orchestrator | Thursday 19 March 2026 02:28:07 +0000 (0:00:00.672) 0:00:34.838 ******** 2026-03-19 02:28:22.286931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:28:22.286938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:28:22.286944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:28:22.286950 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.286957 | orchestrator | 2026-03-19 02:28:22.286963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 02:28:22.286979 | orchestrator | Thursday 19 March 2026 02:28:07 +0000 (0:00:00.389) 0:00:35.227 ******** 2026-03-19 02:28:22.287018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:28:22.287025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:28:22.287031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:28:22.287037 | orchestrator | 
skipping: [testbed-node-3] 2026-03-19 02:28:22.287043 | orchestrator | 2026-03-19 02:28:22.287049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 02:28:22.287055 | orchestrator | Thursday 19 March 2026 02:28:08 +0000 (0:00:00.386) 0:00:35.613 ******** 2026-03-19 02:28:22.287076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:28:22.287083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:28:22.287089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:28:22.287095 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.287101 | orchestrator | 2026-03-19 02:28:22.287107 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 02:28:22.287113 | orchestrator | Thursday 19 March 2026 02:28:08 +0000 (0:00:00.408) 0:00:36.021 ******** 2026-03-19 02:28:22.287119 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.287125 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:22.287131 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:22.287137 | orchestrator | 2026-03-19 02:28:22.287143 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 02:28:22.287149 | orchestrator | Thursday 19 March 2026 02:28:08 +0000 (0:00:00.337) 0:00:36.358 ******** 2026-03-19 02:28:22.287155 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 02:28:22.287161 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 02:28:22.287168 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 02:28:22.287173 | orchestrator | 2026-03-19 02:28:22.287179 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 02:28:22.287185 | orchestrator | Thursday 19 March 2026 02:28:09 +0000 (0:00:01.026) 0:00:37.385 ******** 2026-03-19 02:28:22.287191 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:28:22.287198 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:28:22.287204 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:28:22.287211 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 02:28:22.287217 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 02:28:22.287223 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 02:28:22.287229 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 02:28:22.287235 | orchestrator | 2026-03-19 02:28:22.287241 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 02:28:22.287247 | orchestrator | Thursday 19 March 2026 02:28:10 +0000 (0:00:00.818) 0:00:38.204 ******** 2026-03-19 02:28:22.287269 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:28:22.287275 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:28:22.287281 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:28:22.287287 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 02:28:22.287294 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 02:28:22.287300 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 02:28:22.287306 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 02:28:22.287312 | orchestrator | 2026-03-19 02:28:22.287318 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 02:28:22.287329 | orchestrator | Thursday 19 March 2026 02:28:12 +0000 (0:00:01.897) 0:00:40.102 ******** 2026-03-19 02:28:22.287337 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:28:22.287345 | orchestrator | 2026-03-19 02:28:22.287351 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 02:28:22.287357 | orchestrator | Thursday 19 March 2026 02:28:13 +0000 (0:00:01.193) 0:00:41.295 ******** 2026-03-19 02:28:22.287363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:28:22.287369 | orchestrator | 2026-03-19 02:28:22.287375 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 02:28:22.287381 | orchestrator | Thursday 19 March 2026 02:28:15 +0000 (0:00:01.213) 0:00:42.508 ******** 2026-03-19 02:28:22.287388 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.287393 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.287399 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.287406 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:22.287412 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:22.287418 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:22.287424 | orchestrator | 2026-03-19 02:28:22.287430 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 02:28:22.287436 | orchestrator | Thursday 19 March 2026 02:28:16 +0000 (0:00:01.205) 0:00:43.714 ******** 2026-03-19 02:28:22.287442 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
02:28:22.287448 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.287454 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.287460 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.287467 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:22.287473 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:22.287479 | orchestrator | 2026-03-19 02:28:22.287485 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 02:28:22.287491 | orchestrator | Thursday 19 March 2026 02:28:16 +0000 (0:00:00.718) 0:00:44.433 ******** 2026-03-19 02:28:22.287497 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.287503 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:22.287509 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:22.287515 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:22.287521 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.287527 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.287533 | orchestrator | 2026-03-19 02:28:22.287543 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 02:28:22.287550 | orchestrator | Thursday 19 March 2026 02:28:17 +0000 (0:00:00.855) 0:00:45.288 ******** 2026-03-19 02:28:22.287556 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:22.287562 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.287568 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.287574 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.287580 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:22.287586 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:22.287592 | orchestrator | 2026-03-19 02:28:22.287598 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 02:28:22.287623 | orchestrator | Thursday 19 March 2026 02:28:18 +0000 (0:00:00.766) 0:00:46.055 ******** 
2026-03-19 02:28:22.287630 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.287636 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.287642 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.287648 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:22.287654 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:22.287660 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:22.287666 | orchestrator | 2026-03-19 02:28:22.287672 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 02:28:22.287685 | orchestrator | Thursday 19 March 2026 02:28:19 +0000 (0:00:01.303) 0:00:47.359 ******** 2026-03-19 02:28:22.287691 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.287697 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.287703 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.287709 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:22.287715 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.287721 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.287727 | orchestrator | 2026-03-19 02:28:22.287732 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 02:28:22.287738 | orchestrator | Thursday 19 March 2026 02:28:20 +0000 (0:00:00.593) 0:00:47.952 ******** 2026-03-19 02:28:22.287744 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:22.287750 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:22.287756 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:22.287761 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:22.287767 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:22.287773 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:22.287779 | orchestrator | 2026-03-19 02:28:22.287785 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-03-19 02:28:22.287792 | orchestrator | Thursday 19 March 2026 02:28:21 +0000 (0:00:00.773) 0:00:48.726 ******** 2026-03-19 02:28:22.287798 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:22.287809 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.620489 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.620704 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:40.620729 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:40.620747 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:40.620764 | orchestrator | 2026-03-19 02:28:40.620782 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 02:28:40.620800 | orchestrator | Thursday 19 March 2026 02:28:22 +0000 (0:00:01.008) 0:00:49.734 ******** 2026-03-19 02:28:40.620817 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.620834 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.620850 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.620868 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:40.620884 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:40.620900 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:40.620916 | orchestrator | 2026-03-19 02:28:40.620932 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 02:28:40.620949 | orchestrator | Thursday 19 March 2026 02:28:23 +0000 (0:00:01.277) 0:00:51.012 ******** 2026-03-19 02:28:40.620966 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:40.620980 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:40.620993 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:40.621008 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:40.621025 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621041 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:40.621059 | orchestrator | 2026-03-19 02:28:40.621075 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 02:28:40.621092 | orchestrator | Thursday 19 March 2026 02:28:24 +0000 (0:00:00.645) 0:00:51.657 ******** 2026-03-19 02:28:40.621108 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:40.621124 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:40.621140 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:40.621156 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:40.621172 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:40.621189 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:40.621206 | orchestrator | 2026-03-19 02:28:40.621222 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 02:28:40.621238 | orchestrator | Thursday 19 March 2026 02:28:25 +0000 (0:00:00.822) 0:00:52.479 ******** 2026-03-19 02:28:40.621254 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.621271 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.621315 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.621332 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:40.621349 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621365 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:40.621381 | orchestrator | 2026-03-19 02:28:40.621397 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 02:28:40.621413 | orchestrator | Thursday 19 March 2026 02:28:25 +0000 (0:00:00.580) 0:00:53.059 ******** 2026-03-19 02:28:40.621429 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.621445 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.621462 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.621479 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:40.621495 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621511 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 02:28:40.621526 | orchestrator | 2026-03-19 02:28:40.621544 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 02:28:40.621561 | orchestrator | Thursday 19 March 2026 02:28:26 +0000 (0:00:00.804) 0:00:53.864 ******** 2026-03-19 02:28:40.621578 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.621616 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.621634 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.621648 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:40.621664 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621691 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:40.621708 | orchestrator | 2026-03-19 02:28:40.621725 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 02:28:40.621736 | orchestrator | Thursday 19 March 2026 02:28:26 +0000 (0:00:00.570) 0:00:54.435 ******** 2026-03-19 02:28:40.621745 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:40.621755 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:40.621764 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:40.621774 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:28:40.621783 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621792 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:40.621802 | orchestrator | 2026-03-19 02:28:40.621811 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 02:28:40.621821 | orchestrator | Thursday 19 March 2026 02:28:27 +0000 (0:00:00.798) 0:00:55.234 ******** 2026-03-19 02:28:40.621830 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:40.621840 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:40.621849 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:40.621859 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 02:28:40.621868 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:28:40.621877 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:28:40.621887 | orchestrator | 2026-03-19 02:28:40.621896 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 02:28:40.621906 | orchestrator | Thursday 19 March 2026 02:28:28 +0000 (0:00:00.590) 0:00:55.824 ******** 2026-03-19 02:28:40.621915 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:28:40.621924 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:28:40.621933 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:28:40.621941 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:40.621948 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:40.621956 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:40.621964 | orchestrator | 2026-03-19 02:28:40.621972 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 02:28:40.621980 | orchestrator | Thursday 19 March 2026 02:28:29 +0000 (0:00:00.820) 0:00:56.645 ******** 2026-03-19 02:28:40.621987 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.621995 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.622003 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:28:40.622010 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:28:40.622091 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:28:40.622100 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:28:40.622116 | orchestrator | 2026-03-19 02:28:40.622124 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 02:28:40.622132 | orchestrator | Thursday 19 March 2026 02:28:29 +0000 (0:00:00.611) 0:00:57.256 ******** 2026-03-19 02:28:40.622139 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:28:40.622164 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:28:40.622172 | orchestrator | ok: [testbed-node-5] 
2026-03-19 02:28:40.622180 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:28:40.622188 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:28:40.622195 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:28:40.622203 | orchestrator |
2026-03-19 02:28:40.622211 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 02:28:40.622219 | orchestrator | Thursday 19 March 2026 02:28:31 +0000 (0:00:01.243) 0:00:58.500 ********
2026-03-19 02:28:40.622227 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:28:40.622235 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:28:40.622242 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:28:40.622250 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:28:40.622258 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:28:40.622265 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:28:40.622273 | orchestrator |
2026-03-19 02:28:40.622281 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 02:28:40.622288 | orchestrator | Thursday 19 March 2026 02:28:32 +0000 (0:00:01.659) 0:01:00.159 ********
2026-03-19 02:28:40.622296 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:28:40.622304 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:28:40.622311 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:28:40.622319 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:28:40.622327 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:28:40.622334 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:28:40.622342 | orchestrator |
2026-03-19 02:28:40.622350 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 02:28:40.622358 | orchestrator | Thursday 19 March 2026 02:28:34 +0000 (0:00:02.273) 0:01:02.433 ********
2026-03-19 02:28:40.622366 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:28:40.622376 | orchestrator |
2026-03-19 02:28:40.622384 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 02:28:40.622391 | orchestrator | Thursday 19 March 2026 02:28:36 +0000 (0:00:01.212) 0:01:03.646 ********
2026-03-19 02:28:40.622399 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:28:40.622407 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:28:40.622414 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:28:40.622422 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:28:40.622430 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:28:40.622437 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:28:40.622445 | orchestrator |
2026-03-19 02:28:40.622453 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 02:28:40.622460 | orchestrator | Thursday 19 March 2026 02:28:36 +0000 (0:00:00.590) 0:01:04.236 ********
2026-03-19 02:28:40.622468 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:28:40.622476 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:28:40.622483 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:28:40.622491 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:28:40.622499 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:28:40.622506 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:28:40.622514 | orchestrator |
2026-03-19 02:28:40.622522 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 02:28:40.622530 | orchestrator | Thursday 19 March 2026 02:28:37 +0000 (0:00:00.770) 0:01:05.007 ********
2026-03-19 02:28:40.622538 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622550 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622563 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622571 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622578 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622586 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 02:28:40.622610 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622619 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622627 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622634 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622642 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622650 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 02:28:40.622658 | orchestrator |
2026-03-19 02:28:40.622666 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 02:28:40.622673 | orchestrator | Thursday 19 March 2026 02:28:38 +0000 (0:00:01.315) 0:01:06.322 ********
2026-03-19 02:28:40.622681 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:28:40.622689 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:28:40.622697 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:28:40.622704 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:28:40.622712 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:28:40.622719 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:28:40.622727 | orchestrator |
2026-03-19 02:28:40.622735 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 02:28:40.622742 | orchestrator | Thursday 19 March 2026 02:28:39 +0000 (0:00:01.123) 0:01:07.446 ********
2026-03-19 02:28:40.622750 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:28:40.622758 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:28:40.622766 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:28:40.622773 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:28:40.622781 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:28:40.622789 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:28:40.622796 | orchestrator |
2026-03-19 02:28:40.622809 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 02:29:48.915437 | orchestrator | Thursday 19 March 2026 02:28:40 +0000 (0:00:00.628) 0:01:08.075 ********
2026-03-19 02:29:48.915569 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.915649 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.915664 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.915680 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.915689 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.915698 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.915707 | orchestrator |
2026-03-19 02:29:48.915717 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 02:29:48.915727 | orchestrator | Thursday 19 March 2026 02:28:41 +0000 (0:00:00.842) 0:01:08.917 ********
2026-03-19 02:29:48.915736 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.915745 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.915754 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.915763 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.915774 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.915790 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.915804 | orchestrator |
2026-03-19 02:29:48.915818 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 02:29:48.915832 | orchestrator | Thursday 19 March 2026 02:28:42 +0000 (0:00:00.593) 0:01:09.510 ********
2026-03-19 02:29:48.915878 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:29:48.915894 | orchestrator |
2026-03-19 02:29:48.915907 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 02:29:48.915920 | orchestrator | Thursday 19 March 2026 02:28:43 +0000 (0:00:01.260) 0:01:10.771 ********
2026-03-19 02:29:48.915934 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:29:48.915949 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:29:48.915963 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:29:48.915977 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:29:48.915993 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:29:48.916009 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:29:48.916023 | orchestrator |
2026-03-19 02:29:48.916039 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 02:29:48.916054 | orchestrator | Thursday 19 March 2026 02:29:36 +0000 (0:00:52.846) 0:02:03.618 ********
2026-03-19 02:29:48.916070 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916086 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916102 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916117 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916133 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916143 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916153 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916163 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916172 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916182 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916219 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916229 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916238 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916248 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916258 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.916268 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916278 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916289 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916299 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.916308 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 02:29:48.916318 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 02:29:48.916328 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 02:29:48.916338 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.916347 | orchestrator |
2026-03-19 02:29:48.916356 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 02:29:48.916365 | orchestrator | Thursday 19 March 2026 02:29:36 +0000 (0:00:00.712) 0:02:04.331 ********
2026-03-19 02:29:48.916373 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916382 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916391 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916399 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.916408 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.916425 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.916434 | orchestrator |
2026-03-19 02:29:48.916442 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 02:29:48.916451 | orchestrator | Thursday 19 March 2026 02:29:37 +0000 (0:00:00.761) 0:02:05.092 ********
2026-03-19 02:29:48.916459 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916468 | orchestrator |
2026-03-19 02:29:48.916477 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 02:29:48.916485 | orchestrator | Thursday 19 March 2026 02:29:37 +0000 (0:00:00.148) 0:02:05.241 ********
2026-03-19 02:29:48.916494 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916522 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916531 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916539 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.916548 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.916556 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.916565 | orchestrator |
2026-03-19 02:29:48.916599 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 02:29:48.916610 | orchestrator | Thursday 19 March 2026 02:29:38 +0000 (0:00:00.627) 0:02:05.868 ********
2026-03-19 02:29:48.916619 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916627 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916636 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916644 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.916653 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.916661 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.916669 | orchestrator |
2026-03-19 02:29:48.916678 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 02:29:48.916687 | orchestrator | Thursday 19 March 2026 02:29:39 +0000 (0:00:00.806) 0:02:06.675 ********
2026-03-19 02:29:48.916695 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916704 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916712 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916721 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.916729 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.916738 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.916746 | orchestrator |
2026-03-19 02:29:48.916755 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 02:29:48.916764 | orchestrator | Thursday 19 March 2026 02:29:39 +0000 (0:00:00.588) 0:02:07.264 ********
2026-03-19 02:29:48.916772 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:29:48.916780 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:29:48.916789 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:29:48.916798 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:29:48.916806 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:29:48.916815 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:29:48.916823 | orchestrator |
2026-03-19 02:29:48.916832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 02:29:48.916841 | orchestrator | Thursday 19 March 2026 02:29:43 +0000 (0:00:03.522) 0:02:10.787 ********
2026-03-19 02:29:48.916849 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:29:48.916857 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:29:48.916866 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:29:48.916874 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:29:48.916883 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:29:48.916891 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:29:48.916900 | orchestrator |
2026-03-19 02:29:48.916909 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 02:29:48.916917 | orchestrator | Thursday 19 March 2026 02:29:43 +0000 (0:00:00.601) 0:02:11.388 ********
2026-03-19 02:29:48.916927 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:29:48.916937 | orchestrator |
2026-03-19 02:29:48.916946 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 02:29:48.916962 | orchestrator | Thursday 19 March 2026 02:29:45 +0000 (0:00:01.263) 0:02:12.651 ********
2026-03-19 02:29:48.916970 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.916979 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.916987 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.916996 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.917010 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.917018 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.917027 | orchestrator |
2026-03-19 02:29:48.917036 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 02:29:48.917044 | orchestrator | Thursday 19 March 2026 02:29:45 +0000 (0:00:00.809) 0:02:13.461 ********
2026-03-19 02:29:48.917053 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.917061 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.917069 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.917078 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.917086 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.917095 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.917103 | orchestrator |
2026-03-19 02:29:48.917112 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 02:29:48.917121 | orchestrator | Thursday 19 March 2026 02:29:46 +0000 (0:00:00.596) 0:02:14.058 ********
2026-03-19 02:29:48.917129 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.917138 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.917146 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.917155 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.917163 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.917171 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.917180 | orchestrator |
2026-03-19 02:29:48.917189 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 02:29:48.917197 | orchestrator | Thursday 19 March 2026 02:29:47 +0000 (0:00:00.829) 0:02:14.887 ********
2026-03-19 02:29:48.917206 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.917214 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.917223 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.917231 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.917239 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.917254 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.917268 | orchestrator |
2026-03-19 02:29:48.917283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 02:29:48.917298 | orchestrator | Thursday 19 March 2026 02:29:48 +0000 (0:00:00.607) 0:02:15.495 ********
2026-03-19 02:29:48.917311 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:29:48.917324 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:29:48.917338 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:29:48.917353 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:29:48.917367 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:29:48.917376 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:29:48.917384 | orchestrator |
2026-03-19 02:29:48.917392 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 02:29:48.917409 | orchestrator | Thursday 19 March 2026 02:29:48 +0000 (0:00:00.865) 0:02:16.360 ********
2026-03-19 02:30:00.996680 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:00.996822 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:00.996835 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:00.996844 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:00.996851 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:00.996858 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:00.996865 | orchestrator |
2026-03-19 02:30:00.996873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 02:30:00.996882 | orchestrator | Thursday 19 March 2026 02:29:49 +0000 (0:00:00.632) 0:02:16.992 ********
2026-03-19 02:30:00.996914 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:00.996922 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:00.996929 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:00.996935 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:00.996942 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:00.996949 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:00.996955 | orchestrator |
2026-03-19 02:30:00.996962 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 02:30:00.996969 | orchestrator | Thursday 19 March 2026 02:29:50 +0000 (0:00:00.852) 0:02:17.844 ********
2026-03-19 02:30:00.996976 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:00.996984 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:00.996990 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:00.996998 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:00.997005 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:00.997011 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:00.997018 | orchestrator |
2026-03-19 02:30:00.997025 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 02:30:00.997032 | orchestrator | Thursday 19 March 2026 02:29:51 +0000 (0:00:00.788) 0:02:18.633 ********
2026-03-19 02:30:00.997039 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:00.997048 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:00.997054 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:00.997061 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:30:00.997067 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:30:00.997075 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:30:00.997082 | orchestrator |
2026-03-19 02:30:00.997089 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 02:30:00.997096 | orchestrator | Thursday 19 March 2026 02:29:52 +0000 (0:00:01.390) 0:02:20.024 ********
2026-03-19 02:30:00.997105 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:30:00.997114 | orchestrator |
2026-03-19 02:30:00.997122 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 02:30:00.997128 | orchestrator | Thursday 19 March 2026 02:29:53 +0000 (0:00:01.316) 0:02:21.340 ********
2026-03-19 02:30:00.997136 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-19 02:30:00.997144 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-19 02:30:00.997152 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997159 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-19 02:30:00.997166 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-19 02:30:00.997173 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-19 02:30:00.997180 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997205 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997212 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997220 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-19 02:30:00.997227 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997233 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997246 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997252 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997258 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-19 02:30:00.997265 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997272 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997278 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997293 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997300 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997306 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-19 02:30:00.997313 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997320 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997332 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997339 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-19 02:30:00.997353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997374 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997381 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-19 02:30:00.997394 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997425 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997441 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997447 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997451 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-19 02:30:00.997455 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997460 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997469 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997474 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997478 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-19 02:30:00.997482 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997502 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997509 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-19 02:30:00.997515 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997536 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997543 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997556 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 02:30:00.997563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997619 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997628 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997645 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997657 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 02:30:00.997661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997666 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997670 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997686 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997690 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 02:30:00.997695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997699 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997703 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997712 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997716 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 02:30:00.997721 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997725 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997741 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997752 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-19 02:30:00.997759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997773 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 02:30:00.997780 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-19 02:30:00.997787 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-19 02:30:00.997793 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-19 02:30:00.997799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 02:30:00.997807 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-19 02:30:00.997813 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-19 02:30:00.997821 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-19 02:30:00.997837 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-19 02:30:15.543032 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-19 02:30:15.543149 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-19 02:30:15.543158 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-19 02:30:15.543165 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-19 02:30:15.543171 | orchestrator |
2026-03-19 02:30:15.543177 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 02:30:15.543185 | orchestrator | Thursday 19 March 2026 02:30:00 +0000 (0:00:07.071) 0:02:28.412 ********
2026-03-19 02:30:15.543191 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543198 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543203 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543209 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:30:15.543233 | orchestrator |
2026-03-19 02:30:15.543239 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 02:30:15.543244 | orchestrator | Thursday 19 March 2026 02:30:01 +0000 (0:00:01.043) 0:02:29.455 ********
2026-03-19 02:30:15.543249 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543256 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543261 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543266 | orchestrator |
2026-03-19 02:30:15.543271 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 02:30:15.543276 | orchestrator | Thursday 19 March 2026 02:30:02 +0000 (0:00:00.720) 0:02:30.176 ********
2026-03-19 02:30:15.543282 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543287 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543293 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543298 | orchestrator |
2026-03-19 02:30:15.543303 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 02:30:15.543308 | orchestrator | Thursday 19 March 2026 02:30:03 +0000 (0:00:01.255) 0:02:31.431 ********
2026-03-19 02:30:15.543313 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:15.543318 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:15.543323 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:15.543328 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543333 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543338 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543343 | orchestrator |
2026-03-19 02:30:15.543348 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 02:30:15.543365 | orchestrator | Thursday 19 March 2026 02:30:04 +0000 (0:00:00.821) 0:02:32.252 ********
2026-03-19 02:30:15.543370 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:15.543375 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:15.543380 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:15.543385 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543390 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543395 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543400 | orchestrator |
2026-03-19 02:30:15.543405
| orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 02:30:15.543411 | orchestrator | Thursday 19 March 2026 02:30:05 +0000 (0:00:00.622) 0:02:32.875 ******** 2026-03-19 02:30:15.543415 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:30:15.543420 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:30:15.543426 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:30:15.543431 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:30:15.543436 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:30:15.543441 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:30:15.543446 | orchestrator | 2026-03-19 02:30:15.543451 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 02:30:15.543457 | orchestrator | Thursday 19 March 2026 02:30:06 +0000 (0:00:00.830) 0:02:33.705 ******** 2026-03-19 02:30:15.543462 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:30:15.543467 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:30:15.543472 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:30:15.543477 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:30:15.543482 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:30:15.543487 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:30:15.543499 | orchestrator | 2026-03-19 02:30:15.543504 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 02:30:15.543509 | orchestrator | Thursday 19 March 2026 02:30:06 +0000 (0:00:00.592) 0:02:34.298 ******** 2026-03-19 02:30:15.543514 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:30:15.543519 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:30:15.543524 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:30:15.543529 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:30:15.543534 | orchestrator | skipping: [testbed-node-1] 
2026-03-19 02:30:15.543539 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543544 | orchestrator |
2026-03-19 02:30:15.543550 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 02:30:15.543555 | orchestrator | Thursday 19 March 2026 02:30:07 +0000 (0:00:00.794) 0:02:35.093 ********
2026-03-19 02:30:15.543561 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:15.543592 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:15.543598 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:15.543604 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543623 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543629 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543635 | orchestrator |
2026-03-19 02:30:15.543641 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 02:30:15.543647 | orchestrator | Thursday 19 March 2026 02:30:08 +0000 (0:00:00.584) 0:02:35.678 ********
2026-03-19 02:30:15.543652 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:15.543658 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:15.543664 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:15.543670 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543675 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543681 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543687 | orchestrator |
2026-03-19 02:30:15.543693 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 02:30:15.543699 | orchestrator | Thursday 19 March 2026 02:30:09 +0000 (0:00:00.831) 0:02:36.510 ********
2026-03-19 02:30:15.543705 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:15.543711 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:15.543717 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:15.543723 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543728 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543734 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543740 | orchestrator |
2026-03-19 02:30:15.543746 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 02:30:15.543752 | orchestrator | Thursday 19 March 2026 02:30:09 +0000 (0:00:00.588) 0:02:37.098 ********
2026-03-19 02:30:15.543758 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543764 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543769 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543775 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:15.543781 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:15.543787 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:15.543793 | orchestrator |
2026-03-19 02:30:15.543799 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 02:30:15.543805 | orchestrator | Thursday 19 March 2026 02:30:12 +0000 (0:00:02.784) 0:02:39.882 ********
2026-03-19 02:30:15.543811 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:15.543817 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:15.543823 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:15.543828 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543833 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543838 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543843 | orchestrator |
2026-03-19 02:30:15.543848 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 02:30:15.543863 | orchestrator | Thursday 19 March 2026 02:30:13 +0000 (0:00:00.595) 0:02:40.478 ********
2026-03-19 02:30:15.543868 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:15.543873 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:15.543878 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:15.543883 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543888 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543893 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543898 | orchestrator |
2026-03-19 02:30:15.543903 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 02:30:15.543908 | orchestrator | Thursday 19 March 2026 02:30:13 +0000 (0:00:00.825) 0:02:41.303 ********
2026-03-19 02:30:15.543913 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:15.543918 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:15.543927 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:15.543932 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543938 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543943 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543948 | orchestrator |
2026-03-19 02:30:15.543953 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 02:30:15.543958 | orchestrator | Thursday 19 March 2026 02:30:14 +0000 (0:00:00.615) 0:02:41.918 ********
2026-03-19 02:30:15.543963 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543968 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543973 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 02:30:15.543978 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:15.543984 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:15.543989 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:15.543994 | orchestrator |
2026-03-19 02:30:15.543999 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 02:30:15.544004 | orchestrator | Thursday 19 March 2026 02:30:15 +0000 (0:00:00.860) 0:02:42.779 ********
2026-03-19 02:30:15.544011 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-19 02:30:15.544020 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-19 02:30:15.544031 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-19 02:30:32.933286 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-19 02:30:32.933391 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933400 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-19 02:30:32.933425 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-19 02:30:32.933430 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933434 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933438 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933442 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933445 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933449 | orchestrator |
2026-03-19 02:30:32.933455 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 02:30:32.933460 | orchestrator | Thursday 19 March 2026 02:30:15 +0000 (0:00:00.651) 0:02:43.431 ********
2026-03-19 02:30:32.933464 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933468 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933472 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933476 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933480 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933484 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933488 | orchestrator |
2026-03-19 02:30:32.933492 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 02:30:32.933496 | orchestrator | Thursday 19 March 2026 02:30:16 +0000 (0:00:00.824) 0:02:44.255 ********
2026-03-19 02:30:32.933499 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933503 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933507 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933511 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933514 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933518 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933522 | orchestrator |
2026-03-19 02:30:32.933527 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 02:30:32.933533 | orchestrator | Thursday 19 March 2026 02:30:17 +0000 (0:00:00.593) 0:02:44.849 ********
2026-03-19 02:30:32.933549 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933553 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933557 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933632 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933637 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933641 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933645 | orchestrator |
2026-03-19 02:30:32.933649 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 02:30:32.933653 | orchestrator | Thursday 19 March 2026 02:30:18 +0000 (0:00:00.901) 0:02:45.750 ********
2026-03-19 02:30:32.933656 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933660 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933664 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933668 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933671 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933675 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933679 | orchestrator |
2026-03-19 02:30:32.933682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 02:30:32.933687 | orchestrator | Thursday 19 March 2026 02:30:19 +0000 (0:00:00.885) 0:02:46.636 ********
2026-03-19 02:30:32.933690 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933694 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.933698 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.933701 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933705 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933709 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933718 | orchestrator |
2026-03-19 02:30:32.933722 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 02:30:32.933726 | orchestrator | Thursday 19 March 2026 02:30:19 +0000 (0:00:00.670) 0:02:47.306 ********
2026-03-19 02:30:32.933730 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:32.933734 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:32.933738 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933742 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:32.933745 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933749 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933753 | orchestrator |
2026-03-19 02:30:32.933757 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 02:30:32.933760 | orchestrator | Thursday 19 March 2026 02:30:20 +0000 (0:00:00.895) 0:02:48.201 ********
2026-03-19 02:30:32.933764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:32.933768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:32.933772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:32.933776 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933780 | orchestrator |
2026-03-19 02:30:32.933784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 02:30:32.933800 | orchestrator | Thursday 19 March 2026 02:30:21 +0000 (0:00:00.454) 0:02:48.655 ********
2026-03-19 02:30:32.933804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:32.933809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:32.933812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:32.933817 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933821 | orchestrator |
2026-03-19 02:30:32.933825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 02:30:32.933830 | orchestrator | Thursday 19 March 2026 02:30:21 +0000 (0:00:00.449) 0:02:49.105 ********
2026-03-19 02:30:32.933834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:32.933839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:32.933843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:32.933848 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.933852 | orchestrator |
2026-03-19 02:30:32.933857 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 02:30:32.933861 | orchestrator | Thursday 19 March 2026 02:30:22 +0000 (0:00:00.456) 0:02:49.561 ********
2026-03-19 02:30:32.933865 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:32.933869 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:32.933873 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:32.933877 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933882 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933886 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933890 | orchestrator |
2026-03-19 02:30:32.933894 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 02:30:32.933899 | orchestrator | Thursday 19 March 2026 02:30:22 +0000 (0:00:00.628) 0:02:50.189 ********
2026-03-19 02:30:32.933903 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-19 02:30:32.933907 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-19 02:30:32.933911 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-19 02:30:32.933916 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-19 02:30:32.933920 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.933924 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-19 02:30:32.933929 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:32.933933 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-19 02:30:32.933937 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:32.933942 | orchestrator |
2026-03-19 02:30:32.933946 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 02:30:32.933954 | orchestrator | Thursday 19 March 2026 02:30:24 +0000 (0:00:01.880) 0:02:52.070 ********
2026-03-19 02:30:32.933958 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:30:32.933963 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:30:32.933967 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:30:32.933971 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:30:32.933976 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:30:32.933980 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:30:32.933984 | orchestrator |
2026-03-19 02:30:32.933989 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-19 02:30:32.933993 | orchestrator | Thursday 19 March 2026 02:30:27 +0000 (0:00:02.755) 0:02:54.825 ********
2026-03-19 02:30:32.933997 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:30:32.934005 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:30:32.934009 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:30:32.934052 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:30:32.934057 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:30:32.934061 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:30:32.934065 | orchestrator |
2026-03-19 02:30:32.934069 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-19 02:30:32.934074 | orchestrator | Thursday 19 March 2026 02:30:28 +0000 (0:00:01.054) 0:02:55.879 ********
2026-03-19 02:30:32.934078 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:32.934082 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:32.934087 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:32.934092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:30:32.934096 | orchestrator |
2026-03-19 02:30:32.934100 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-19 02:30:32.934105 | orchestrator | Thursday 19 March 2026 02:30:29 +0000 (0:00:01.121) 0:02:57.001 ********
2026-03-19 02:30:32.934109 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:30:32.934113 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:30:32.934118 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:30:32.934122 | orchestrator |
2026-03-19 02:30:32.934126 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-19 02:30:32.934130 | orchestrator | Thursday 19 March 2026 02:30:29 +0000 (0:00:00.348) 0:02:57.349 ********
2026-03-19 02:30:32.934135 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:30:32.934139 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:30:32.934143 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:30:32.934148 | orchestrator |
2026-03-19 02:30:32.934152 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-19 02:30:32.934156 | orchestrator | Thursday 19 March 2026 02:30:31 +0000 (0:00:01.484) 0:02:58.834 ********
2026-03-19 02:30:32.934161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 02:30:32.934165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 02:30:32.934170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 02:30:32.934174 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:32.934178 | orchestrator |
2026-03-19 02:30:32.934183 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-19 02:30:32.934187 | orchestrator | Thursday 19 March 2026 02:30:32 +0000 (0:00:00.704) 0:02:59.538 ********
2026-03-19 02:30:32.934191 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:30:32.934194 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:30:32.934198 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:30:32.934202 | orchestrator |
2026-03-19 02:30:32.934206 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-19 02:30:32.934210 | orchestrator | Thursday 19 March 2026 02:30:32 +0000 (0:00:00.335) 0:02:59.874 ********
2026-03-19 02:30:32.934217 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:50.641246 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:50.641377 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:50.641414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:30:50.641424 | orchestrator |
2026-03-19 02:30:50.641433 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-19 02:30:50.641442 | orchestrator | Thursday 19 March 2026 02:30:33 +0000 (0:00:01.107) 0:03:00.981 ********
2026-03-19 02:30:50.641450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:50.641458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:50.641466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:50.641473 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641480 | orchestrator |
2026-03-19 02:30:50.641487 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-19 02:30:50.641495 | orchestrator | Thursday 19 March 2026 02:30:33 +0000 (0:00:00.456) 0:03:01.438 ********
2026-03-19 02:30:50.641502 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641509 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:50.641516 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:50.641524 | orchestrator |
2026-03-19 02:30:50.641531 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-19 02:30:50.641538 | orchestrator | Thursday 19 March 2026 02:30:34 +0000 (0:00:00.336) 0:03:01.774 ********
2026-03-19 02:30:50.641545 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641552 | orchestrator |
2026-03-19 02:30:50.641606 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-19 02:30:50.641614 | orchestrator | Thursday 19 March 2026 02:30:34 +0000 (0:00:00.238) 0:03:02.013 ********
2026-03-19 02:30:50.641621 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641628 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:50.641636 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:50.641643 | orchestrator |
2026-03-19 02:30:50.641650 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-19 02:30:50.641658 | orchestrator | Thursday 19 March 2026 02:30:34 +0000 (0:00:00.344) 0:03:02.358 ********
2026-03-19 02:30:50.641665 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641672 | orchestrator |
2026-03-19 02:30:50.641679 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-19 02:30:50.641687 | orchestrator | Thursday 19 March 2026 02:30:35 +0000 (0:00:00.690) 0:03:03.048 ********
2026-03-19 02:30:50.641694 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641701 | orchestrator |
2026-03-19 02:30:50.641708 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-19 02:30:50.641715 | orchestrator | Thursday 19 March 2026 02:30:35 +0000 (0:00:00.234) 0:03:03.282 ********
2026-03-19 02:30:50.641724 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641732 | orchestrator |
2026-03-19 02:30:50.641740 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-19 02:30:50.641749 | orchestrator | Thursday 19 March 2026 02:30:35 +0000 (0:00:00.155) 0:03:03.438 ********
2026-03-19 02:30:50.641771 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641779 | orchestrator |
2026-03-19 02:30:50.641787 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-19 02:30:50.641796 | orchestrator | Thursday 19 March 2026 02:30:36 +0000 (0:00:00.239) 0:03:03.678 ********
2026-03-19 02:30:50.641804 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641813 | orchestrator |
2026-03-19 02:30:50.641821 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-19 02:30:50.641830 | orchestrator | Thursday 19 March 2026 02:30:36 +0000 (0:00:00.245) 0:03:03.923 ********
2026-03-19 02:30:50.641838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:50.641847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:50.641854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:50.641870 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641878 | orchestrator |
2026-03-19 02:30:50.641887 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-19 02:30:50.641895 | orchestrator | Thursday 19 March 2026 02:30:36 +0000 (0:00:00.424) 0:03:04.348 ********
2026-03-19 02:30:50.641904 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641912 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:50.641920 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:50.641928 | orchestrator |
2026-03-19 02:30:50.641936 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-19 02:30:50.641944 | orchestrator | Thursday 19 March 2026 02:30:37 +0000 (0:00:00.332) 0:03:04.681 ********
2026-03-19 02:30:50.641952 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641960 | orchestrator |
2026-03-19 02:30:50.641968 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-19 02:30:50.641977 | orchestrator | Thursday 19 March 2026 02:30:37 +0000 (0:00:00.247) 0:03:04.928 ********
2026-03-19 02:30:50.641985 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.641993 | orchestrator |
2026-03-19 02:30:50.642000 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-19 02:30:50.642009 | orchestrator | Thursday 19 March 2026 02:30:37 +0000 (0:00:00.221) 0:03:05.150 ********
2026-03-19 02:30:50.642073 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:50.642081 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:50.642090 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:50.642099 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:30:50.642108 | orchestrator |
2026-03-19 02:30:50.642116 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-19 02:30:50.642158 | orchestrator | Thursday 19 March 2026 02:30:38 +0000 (0:00:01.063) 0:03:06.213 ********
2026-03-19 02:30:50.642168 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:50.642182 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:50.642195 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:50.642206 | orchestrator |
2026-03-19 02:30:50.642241 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-19 02:30:50.642254 | orchestrator | Thursday 19 March 2026 02:30:39 +0000 (0:00:00.336) 0:03:06.549 ********
2026-03-19 02:30:50.642266 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:30:50.642278 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:30:50.642289 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:30:50.642301 | orchestrator |
2026-03-19 02:30:50.642313 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-19 02:30:50.642325 | orchestrator | Thursday 19 March 2026 02:30:40 +0000 (0:00:01.677) 0:03:08.227 ********
2026-03-19 02:30:50.642338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:50.642349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:50.642361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:50.642373 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.642385 | orchestrator |
2026-03-19 02:30:50.642398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-19 02:30:50.642410 | orchestrator | Thursday 19 March 2026 02:30:41 +0000 (0:00:00.912) 0:03:09.140 ********
2026-03-19 02:30:50.642422 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:50.642435 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:50.642443 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:50.642450 | orchestrator |
2026-03-19 02:30:50.642457 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-19 02:30:50.642464 | orchestrator | Thursday 19 March 2026 02:30:42 +0000 (0:00:00.370) 0:03:09.510 ********
2026-03-19 02:30:50.642471 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:50.642478 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:50.642485 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:50.642502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:30:50.642509 | orchestrator |
2026-03-19 02:30:50.642516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-19 02:30:50.642524 | orchestrator | Thursday 19 March 2026 02:30:43 +0000 (0:00:01.072) 0:03:10.583 ********
2026-03-19 02:30:50.642531 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:50.642538 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:50.642545 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:50.642552 | orchestrator |
2026-03-19 02:30:50.642595 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-19 02:30:50.642603 | orchestrator | Thursday 19 March 2026 02:30:43 +0000 (0:00:00.332) 0:03:10.916 ********
2026-03-19 02:30:50.642610 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:30:50.642617 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:30:50.642624 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:30:50.642632 | orchestrator |
2026-03-19 02:30:50.642639 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-19 02:30:50.642646 | orchestrator | Thursday 19 March 2026 02:30:44 +0000 (0:00:01.296) 0:03:12.212 ********
2026-03-19 02:30:50.642653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:30:50.642660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:30:50.642675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:30:50.642682 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.642689 | orchestrator |
2026-03-19 02:30:50.642697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-19 02:30:50.642704 | orchestrator | Thursday 19 March 2026 02:30:45 +0000 (0:00:00.874) 0:03:13.087 ********
2026-03-19 02:30:50.642711 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:30:50.642718 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:30:50.642725 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:30:50.642737 | orchestrator |
2026-03-19 02:30:50.642748 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-19 02:30:50.642765 | orchestrator | Thursday 19 March 2026 02:30:46 +0000 (0:00:00.546) 0:03:13.633 ********
2026-03-19 02:30:50.642779 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:30:50.642791 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:30:50.642803 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:30:50.642815 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:30:50.642827 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:30:50.642837 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:30:50.642848 | orchestrator |
2026-03-19 02:30:50.642860 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-19 02:30:50.642871 | orchestrator | Thursday 19 March 2026 02:30:46 +0000 (0:00:00.623) 0:03:14.256 ********
2026-03-19 02:30:50.642883 | orchestrator | skipping: [testbed-node-3]
2026-03-19
02:30:50.642894 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:30:50.642906 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:30:50.642918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:30:50.642930 | orchestrator | 2026-03-19 02:30:50.642943 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-19 02:30:50.642955 | orchestrator | Thursday 19 March 2026 02:30:47 +0000 (0:00:01.136) 0:03:15.393 ******** 2026-03-19 02:30:50.642968 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:30:50.642981 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:30:50.642994 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:30:50.643006 | orchestrator | 2026-03-19 02:30:50.643019 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-19 02:30:50.643031 | orchestrator | Thursday 19 March 2026 02:30:48 +0000 (0:00:00.346) 0:03:15.740 ******** 2026-03-19 02:30:50.643044 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:30:50.643066 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:30:50.643079 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:30:50.643092 | orchestrator | 2026-03-19 02:30:50.643105 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-19 02:30:50.643118 | orchestrator | Thursday 19 March 2026 02:30:49 +0000 (0:00:01.223) 0:03:16.963 ******** 2026-03-19 02:30:50.643130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 02:30:50.643152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 02:31:07.588699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 02:31:07.588833 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.588853 | orchestrator | 2026-03-19 02:31:07.588862 | orchestrator | 
RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-19 02:31:07.588870 | orchestrator | Thursday 19 March 2026 02:30:50 +0000 (0:00:01.122) 0:03:18.086 ******** 2026-03-19 02:31:07.588877 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.588884 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.588890 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.588897 | orchestrator | 2026-03-19 02:31:07.588903 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-19 02:31:07.588909 | orchestrator | 2026-03-19 02:31:07.588916 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 02:31:07.588923 | orchestrator | Thursday 19 March 2026 02:30:51 +0000 (0:00:00.608) 0:03:18.694 ******** 2026-03-19 02:31:07.588930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:31:07.588938 | orchestrator | 2026-03-19 02:31:07.588944 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 02:31:07.588950 | orchestrator | Thursday 19 March 2026 02:30:52 +0000 (0:00:00.815) 0:03:19.510 ******** 2026-03-19 02:31:07.588957 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:31:07.588963 | orchestrator | 2026-03-19 02:31:07.588969 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 02:31:07.588975 | orchestrator | Thursday 19 March 2026 02:30:52 +0000 (0:00:00.546) 0:03:20.056 ******** 2026-03-19 02:31:07.588981 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.588987 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.588994 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589000 | orchestrator | 
2026-03-19 02:31:07.589006 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 02:31:07.589012 | orchestrator | Thursday 19 March 2026 02:30:53 +0000 (0:00:00.727) 0:03:20.784 ******** 2026-03-19 02:31:07.589018 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589025 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589031 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589037 | orchestrator | 2026-03-19 02:31:07.589043 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 02:31:07.589049 | orchestrator | Thursday 19 March 2026 02:30:53 +0000 (0:00:00.547) 0:03:21.332 ******** 2026-03-19 02:31:07.589056 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589062 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589068 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589074 | orchestrator | 2026-03-19 02:31:07.589081 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 02:31:07.589087 | orchestrator | Thursday 19 March 2026 02:30:54 +0000 (0:00:00.368) 0:03:21.700 ******** 2026-03-19 02:31:07.589093 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589099 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589106 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589112 | orchestrator | 2026-03-19 02:31:07.589133 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 02:31:07.589139 | orchestrator | Thursday 19 March 2026 02:30:54 +0000 (0:00:00.345) 0:03:22.046 ******** 2026-03-19 02:31:07.589167 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589174 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589180 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589186 | orchestrator | 2026-03-19 
02:31:07.589193 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 02:31:07.589202 | orchestrator | Thursday 19 March 2026 02:30:55 +0000 (0:00:00.775) 0:03:22.821 ******** 2026-03-19 02:31:07.589210 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589217 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589224 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589231 | orchestrator | 2026-03-19 02:31:07.589238 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 02:31:07.589246 | orchestrator | Thursday 19 March 2026 02:30:55 +0000 (0:00:00.577) 0:03:23.399 ******** 2026-03-19 02:31:07.589253 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589260 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589267 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589274 | orchestrator | 2026-03-19 02:31:07.589281 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 02:31:07.589288 | orchestrator | Thursday 19 March 2026 02:30:56 +0000 (0:00:00.360) 0:03:23.760 ******** 2026-03-19 02:31:07.589295 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589303 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589310 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589316 | orchestrator | 2026-03-19 02:31:07.589323 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 02:31:07.589330 | orchestrator | Thursday 19 March 2026 02:30:57 +0000 (0:00:00.707) 0:03:24.467 ******** 2026-03-19 02:31:07.589338 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589344 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589351 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589358 | orchestrator | 2026-03-19 02:31:07.589366 | orchestrator | TASK 
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 02:31:07.589373 | orchestrator | Thursday 19 March 2026 02:30:57 +0000 (0:00:00.737) 0:03:25.204 ******** 2026-03-19 02:31:07.589380 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589387 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589395 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589402 | orchestrator | 2026-03-19 02:31:07.589409 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 02:31:07.589416 | orchestrator | Thursday 19 March 2026 02:30:58 +0000 (0:00:00.597) 0:03:25.802 ******** 2026-03-19 02:31:07.589422 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589429 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589435 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589441 | orchestrator | 2026-03-19 02:31:07.589447 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 02:31:07.589466 | orchestrator | Thursday 19 March 2026 02:30:58 +0000 (0:00:00.351) 0:03:26.153 ******** 2026-03-19 02:31:07.589473 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589479 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589485 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589491 | orchestrator | 2026-03-19 02:31:07.589497 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 02:31:07.589503 | orchestrator | Thursday 19 March 2026 02:30:59 +0000 (0:00:00.331) 0:03:26.485 ******** 2026-03-19 02:31:07.589509 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589516 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589522 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589528 | orchestrator | 2026-03-19 02:31:07.589534 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-03-19 02:31:07.589540 | orchestrator | Thursday 19 March 2026 02:30:59 +0000 (0:00:00.378) 0:03:26.863 ******** 2026-03-19 02:31:07.589546 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589588 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589600 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589607 | orchestrator | 2026-03-19 02:31:07.589613 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 02:31:07.589619 | orchestrator | Thursday 19 March 2026 02:30:59 +0000 (0:00:00.594) 0:03:27.458 ******** 2026-03-19 02:31:07.589626 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589632 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589638 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589644 | orchestrator | 2026-03-19 02:31:07.589650 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 02:31:07.589656 | orchestrator | Thursday 19 March 2026 02:31:00 +0000 (0:00:00.345) 0:03:27.804 ******** 2026-03-19 02:31:07.589662 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589668 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:31:07.589674 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:31:07.589680 | orchestrator | 2026-03-19 02:31:07.589686 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 02:31:07.589692 | orchestrator | Thursday 19 March 2026 02:31:00 +0000 (0:00:00.328) 0:03:28.133 ******** 2026-03-19 02:31:07.589699 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589705 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589711 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589717 | orchestrator | 2026-03-19 02:31:07.589723 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-03-19 02:31:07.589729 | orchestrator | Thursday 19 March 2026 02:31:00 +0000 (0:00:00.319) 0:03:28.452 ******** 2026-03-19 02:31:07.589735 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589744 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589754 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589764 | orchestrator | 2026-03-19 02:31:07.589774 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 02:31:07.589784 | orchestrator | Thursday 19 March 2026 02:31:01 +0000 (0:00:00.580) 0:03:29.032 ******** 2026-03-19 02:31:07.589794 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589804 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589815 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589825 | orchestrator | 2026-03-19 02:31:07.589841 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-19 02:31:07.589852 | orchestrator | Thursday 19 March 2026 02:31:02 +0000 (0:00:00.571) 0:03:29.604 ******** 2026-03-19 02:31:07.589862 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589872 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.589882 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.589891 | orchestrator | 2026-03-19 02:31:07.589897 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-19 02:31:07.589903 | orchestrator | Thursday 19 March 2026 02:31:02 +0000 (0:00:00.341) 0:03:29.946 ******** 2026-03-19 02:31:07.589910 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:31:07.589916 | orchestrator | 2026-03-19 02:31:07.589923 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-19 02:31:07.589929 | orchestrator | Thursday 19 March 
2026 02:31:03 +0000 (0:00:00.956) 0:03:30.902 ******** 2026-03-19 02:31:07.589935 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:31:07.589941 | orchestrator | 2026-03-19 02:31:07.589947 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-19 02:31:07.589953 | orchestrator | Thursday 19 March 2026 02:31:03 +0000 (0:00:00.158) 0:03:31.061 ******** 2026-03-19 02:31:07.589959 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 02:31:07.589966 | orchestrator | 2026-03-19 02:31:07.589972 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-19 02:31:07.589978 | orchestrator | Thursday 19 March 2026 02:31:04 +0000 (0:00:01.013) 0:03:32.075 ******** 2026-03-19 02:31:07.589991 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.589997 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.590003 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.590009 | orchestrator | 2026-03-19 02:31:07.590061 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-19 02:31:07.590068 | orchestrator | Thursday 19 March 2026 02:31:04 +0000 (0:00:00.347) 0:03:32.422 ******** 2026-03-19 02:31:07.590075 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:31:07.590081 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:31:07.590087 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:31:07.590093 | orchestrator | 2026-03-19 02:31:07.590099 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-19 02:31:07.590105 | orchestrator | Thursday 19 March 2026 02:31:05 +0000 (0:00:00.627) 0:03:33.050 ******** 2026-03-19 02:31:07.590112 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:31:07.590118 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:31:07.590124 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:31:07.590130 | orchestrator | 
2026-03-19 02:31:07.590137 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-19 02:31:07.590143 | orchestrator | Thursday 19 March 2026 02:31:06 +0000 (0:00:01.221) 0:03:34.271 ******** 2026-03-19 02:31:07.590149 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:31:07.590155 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:31:07.590169 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:31:07.590176 | orchestrator | 2026-03-19 02:31:07.590189 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-19 02:32:18.000983 | orchestrator | Thursday 19 March 2026 02:31:07 +0000 (0:00:00.764) 0:03:35.036 ******** 2026-03-19 02:32:18.001073 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001086 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:32:18.001094 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001102 | orchestrator | 2026-03-19 02:32:18.001111 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-19 02:32:18.001119 | orchestrator | Thursday 19 March 2026 02:31:08 +0000 (0:00:00.698) 0:03:35.734 ******** 2026-03-19 02:32:18.001127 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:32:18.001137 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:32:18.001144 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:32:18.001151 | orchestrator | 2026-03-19 02:32:18.001156 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-19 02:32:18.001160 | orchestrator | Thursday 19 March 2026 02:31:09 +0000 (0:00:01.060) 0:03:36.794 ******** 2026-03-19 02:32:18.001165 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001170 | orchestrator | 2026-03-19 02:32:18.001174 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-19 02:32:18.001179 | orchestrator | 
Thursday 19 March 2026 02:31:10 +0000 (0:00:01.320) 0:03:38.115 ******** 2026-03-19 02:32:18.001183 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:32:18.001187 | orchestrator | 2026-03-19 02:32:18.001192 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-19 02:32:18.001197 | orchestrator | Thursday 19 March 2026 02:31:11 +0000 (0:00:00.730) 0:03:38.846 ******** 2026-03-19 02:32:18.001201 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 02:32:18.001206 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:32:18.001211 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:32:18.001215 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:32:18.001220 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-19 02:32:18.001225 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:32:18.001229 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:32:18.001234 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-19 02:32:18.001238 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:32:18.001263 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-19 02:32:18.001268 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-19 02:32:18.001272 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-19 02:32:18.001277 | orchestrator | 2026-03-19 02:32:18.001281 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-19 02:32:18.001285 | orchestrator | Thursday 19 March 2026 02:31:14 +0000 (0:00:03.188) 0:03:42.034 ******** 2026-03-19 02:32:18.001290 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001294 | orchestrator | 
changed: [testbed-node-1] 2026-03-19 02:32:18.001310 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001314 | orchestrator | 2026-03-19 02:32:18.001319 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-19 02:32:18.001323 | orchestrator | Thursday 19 March 2026 02:31:15 +0000 (0:00:01.105) 0:03:43.140 ******** 2026-03-19 02:32:18.001328 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:32:18.001332 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:32:18.001336 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:32:18.001341 | orchestrator | 2026-03-19 02:32:18.001345 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-19 02:32:18.001349 | orchestrator | Thursday 19 March 2026 02:31:16 +0000 (0:00:00.640) 0:03:43.780 ******** 2026-03-19 02:32:18.001354 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:32:18.001358 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:32:18.001362 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:32:18.001367 | orchestrator | 2026-03-19 02:32:18.001371 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-19 02:32:18.001375 | orchestrator | Thursday 19 March 2026 02:31:16 +0000 (0:00:00.342) 0:03:44.122 ******** 2026-03-19 02:32:18.001380 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001384 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:32:18.001388 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001393 | orchestrator | 2026-03-19 02:32:18.001397 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-19 02:32:18.001401 | orchestrator | Thursday 19 March 2026 02:31:18 +0000 (0:00:01.453) 0:03:45.576 ******** 2026-03-19 02:32:18.001406 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001410 | orchestrator | changed: [testbed-node-1] 2026-03-19 
02:32:18.001414 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001419 | orchestrator | 2026-03-19 02:32:18.001423 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-19 02:32:18.001427 | orchestrator | Thursday 19 March 2026 02:31:19 +0000 (0:00:01.302) 0:03:46.878 ******** 2026-03-19 02:32:18.001432 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:32:18.001436 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:32:18.001440 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:32:18.001445 | orchestrator | 2026-03-19 02:32:18.001449 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-19 02:32:18.001453 | orchestrator | Thursday 19 March 2026 02:31:19 +0000 (0:00:00.564) 0:03:47.443 ******** 2026-03-19 02:32:18.001458 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:32:18.001463 | orchestrator | 2026-03-19 02:32:18.001468 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-19 02:32:18.001472 | orchestrator | Thursday 19 March 2026 02:31:20 +0000 (0:00:00.568) 0:03:48.011 ******** 2026-03-19 02:32:18.001476 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:32:18.001481 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:32:18.001485 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:32:18.001490 | orchestrator | 2026-03-19 02:32:18.001494 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-19 02:32:18.001511 | orchestrator | Thursday 19 March 2026 02:31:20 +0000 (0:00:00.306) 0:03:48.317 ******** 2026-03-19 02:32:18.001515 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:32:18.001527 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:32:18.001531 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 02:32:18.001536 | orchestrator | 2026-03-19 02:32:18.001540 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-19 02:32:18.001567 | orchestrator | Thursday 19 March 2026 02:31:21 +0000 (0:00:00.542) 0:03:48.859 ******** 2026-03-19 02:32:18.001574 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:32:18.001583 | orchestrator | 2026-03-19 02:32:18.001591 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-19 02:32:18.001597 | orchestrator | Thursday 19 March 2026 02:31:21 +0000 (0:00:00.555) 0:03:49.415 ******** 2026-03-19 02:32:18.001605 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001612 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:32:18.001620 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001628 | orchestrator | 2026-03-19 02:32:18.001635 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-19 02:32:18.001643 | orchestrator | Thursday 19 March 2026 02:31:23 +0000 (0:00:01.823) 0:03:51.238 ******** 2026-03-19 02:32:18.001648 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001653 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:32:18.001658 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001663 | orchestrator | 2026-03-19 02:32:18.001668 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-19 02:32:18.001673 | orchestrator | Thursday 19 March 2026 02:31:25 +0000 (0:00:01.626) 0:03:52.865 ******** 2026-03-19 02:32:18.001678 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:32:18.001683 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:32:18.001688 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:32:18.001693 | orchestrator | 2026-03-19 02:32:18.001699 | 
orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-19 02:32:18.001704 | orchestrator | Thursday 19 March 2026 02:31:27 +0000 (0:00:01.904) 0:03:54.769 ********
2026-03-19 02:32:18.001709 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:32:18.001714 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:32:18.001719 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:32:18.001724 | orchestrator |
2026-03-19 02:32:18.001729 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-19 02:32:18.001734 | orchestrator | Thursday 19 March 2026 02:31:29 +0000 (0:00:02.134) 0:03:56.903 ********
2026-03-19 02:32:18.001739 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:32:18.001744 | orchestrator |
2026-03-19 02:32:18.001749 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-19 02:32:18.001754 | orchestrator | Thursday 19 March 2026 02:31:30 +0000 (0:00:00.846) 0:03:57.750 ********
2026-03-19 02:32:18.001763 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-19 02:32:18.001768 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:18.001774 | orchestrator |
2026-03-19 02:32:18.001781 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-19 02:32:18.001788 | orchestrator | Thursday 19 March 2026 02:31:52 +0000 (0:00:22.019) 0:04:19.769 ********
2026-03-19 02:32:18.001795 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:18.001803 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:18.001811 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:18.001819 | orchestrator |
2026-03-19 02:32:18.001824 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-19 02:32:18.001829 | orchestrator | Thursday 19 March 2026 02:32:01 +0000 (0:00:09.154) 0:04:28.924 ********
2026-03-19 02:32:18.001834 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:18.001839 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:18.001844 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:18.001849 | orchestrator |
2026-03-19 02:32:18.001858 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-19 02:32:18.001863 | orchestrator | Thursday 19 March 2026 02:32:01 +0000 (0:00:00.317) 0:04:29.242 ********
2026-03-19 02:32:18.001870 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-19 02:32:18.001877 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-19 02:32:18.001884 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-19 02:32:18.001895 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-19 02:32:31.393011 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-19 02:32:31.393129 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1eb86b5ba4aec48a78176953b4d4c0bfd9af1c83'}])
2026-03-19 02:32:31.393141 | orchestrator |
2026-03-19 02:32:31.393150 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-19 02:32:31.393158 | orchestrator | Thursday 19 March 2026 02:32:17 +0000 (0:00:16.209) 0:04:45.451 ********
2026-03-19 02:32:31.393165 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393172 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393179 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393185 | orchestrator |
2026-03-19 02:32:31.393191 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-19 02:32:31.393197 | orchestrator | Thursday 19 March 2026 02:32:18 +0000 (0:00:00.356) 0:04:45.807 ********
2026-03-19 02:32:31.393204 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:32:31.393211 | orchestrator |
2026-03-19 02:32:31.393217 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-19 02:32:31.393223 | orchestrator | Thursday 19 March 2026 02:32:19 +0000 (0:00:00.786) 0:04:46.594 ********
2026-03-19 02:32:31.393229 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393236 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393243 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393250 | orchestrator |
2026-03-19 02:32:31.393256 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-19 02:32:31.393284 | orchestrator | Thursday 19 March 2026 02:32:19 +0000 (0:00:00.366) 0:04:46.961 ********
2026-03-19 02:32:31.393305 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393311 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393317 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393323 | orchestrator |
2026-03-19 02:32:31.393329 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-19 02:32:31.393336 | orchestrator | Thursday 19 March 2026 02:32:19 +0000 (0:00:00.339) 0:04:47.300 ********
2026-03-19 02:32:31.393342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 02:32:31.393348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 02:32:31.393354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 02:32:31.393361 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393367 | orchestrator |
2026-03-19 02:32:31.393373 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-19 02:32:31.393379 | orchestrator | Thursday 19 March 2026 02:32:20 +0000 (0:00:00.888) 0:04:48.188 ********
2026-03-19 02:32:31.393385 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393391 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393397 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393403 | orchestrator |
2026-03-19 02:32:31.393409 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-19 02:32:31.393415 | orchestrator |
2026-03-19 02:32:31.393422 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 02:32:31.393428 | orchestrator | Thursday 19 March 2026 02:32:21 +0000 (0:00:00.829) 0:04:49.018 ********
2026-03-19 02:32:31.393435 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:32:31.393442 | orchestrator |
2026-03-19 02:32:31.393448 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 02:32:31.393454 | orchestrator | Thursday 19 March 2026 02:32:22 +0000 (0:00:00.515) 0:04:49.534 ********
2026-03-19 02:32:31.393461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:32:31.393467 | orchestrator |
2026-03-19 02:32:31.393473 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 02:32:31.393479 | orchestrator | Thursday 19 March 2026 02:32:22 +0000 (0:00:00.732) 0:04:50.266 ********
2026-03-19 02:32:31.393485 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393491 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393497 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393503 | orchestrator |
2026-03-19 02:32:31.393509 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 02:32:31.393515 | orchestrator | Thursday 19 March 2026 02:32:23 +0000 (0:00:00.747) 0:04:51.014 ********
2026-03-19 02:32:31.393522 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393528 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393534 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393609 | orchestrator |
2026-03-19 02:32:31.393619 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 02:32:31.393626 | orchestrator | Thursday 19 March 2026 02:32:23 +0000 (0:00:00.332) 0:04:51.347 ********
2026-03-19 02:32:31.393633 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393640 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393647 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393654 | orchestrator |
2026-03-19 02:32:31.393675 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 02:32:31.393682 | orchestrator | Thursday 19 March 2026 02:32:24 +0000 (0:00:00.562) 0:04:51.909 ********
2026-03-19 02:32:31.393689 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393697 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393710 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393716 | orchestrator |
2026-03-19 02:32:31.393723 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 02:32:31.393730 | orchestrator | Thursday 19 March 2026 02:32:24 +0000 (0:00:00.365) 0:04:52.275 ********
2026-03-19 02:32:31.393737 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393745 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393751 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393758 | orchestrator |
2026-03-19 02:32:31.393765 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 02:32:31.393772 | orchestrator | Thursday 19 March 2026 02:32:25 +0000 (0:00:00.718) 0:04:52.993 ********
2026-03-19 02:32:31.393780 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393786 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393793 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393800 | orchestrator |
2026-03-19 02:32:31.393807 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 02:32:31.393814 | orchestrator | Thursday 19 March 2026 02:32:25 +0000 (0:00:00.347) 0:04:53.340 ********
2026-03-19 02:32:31.393820 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393828 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393834 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393841 | orchestrator |
2026-03-19 02:32:31.393848 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 02:32:31.393855 | orchestrator | Thursday 19 March 2026 02:32:26 +0000 (0:00:00.583) 0:04:53.924 ********
2026-03-19 02:32:31.393862 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393870 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393876 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393883 | orchestrator |
2026-03-19 02:32:31.393891 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 02:32:31.393898 | orchestrator | Thursday 19 March 2026 02:32:27 +0000 (0:00:00.753) 0:04:54.677 ********
2026-03-19 02:32:31.393905 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393911 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.393919 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.393926 | orchestrator |
2026-03-19 02:32:31.393933 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 02:32:31.393940 | orchestrator | Thursday 19 March 2026 02:32:27 +0000 (0:00:00.757) 0:04:55.435 ********
2026-03-19 02:32:31.393947 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.393953 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.393964 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.393971 | orchestrator |
2026-03-19 02:32:31.393977 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 02:32:31.393983 | orchestrator | Thursday 19 March 2026 02:32:28 +0000 (0:00:00.308) 0:04:55.744 ********
2026-03-19 02:32:31.393990 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.393996 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.394002 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.394008 | orchestrator |
2026-03-19 02:32:31.394058 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 02:32:31.394065 | orchestrator | Thursday 19 March 2026 02:32:28 +0000 (0:00:00.566) 0:04:56.310 ********
2026-03-19 02:32:31.394072 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.394078 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.394084 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.394090 | orchestrator |
2026-03-19 02:32:31.394096 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 02:32:31.394102 | orchestrator | Thursday 19 March 2026 02:32:29 +0000 (0:00:00.309) 0:04:56.620 ********
2026-03-19 02:32:31.394109 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.394115 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.394121 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.394127 | orchestrator |
2026-03-19 02:32:31.394147 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 02:32:31.394153 | orchestrator | Thursday 19 March 2026 02:32:29 +0000 (0:00:00.313) 0:04:56.933 ********
2026-03-19 02:32:31.394160 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.394166 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.394172 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.394178 | orchestrator |
2026-03-19 02:32:31.394184 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 02:32:31.394190 | orchestrator | Thursday 19 March 2026 02:32:29 +0000 (0:00:00.326) 0:04:57.260 ********
2026-03-19 02:32:31.394197 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.394203 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.394209 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.394215 | orchestrator |
2026-03-19 02:32:31.394221 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 02:32:31.394228 | orchestrator | Thursday 19 March 2026 02:32:30 +0000 (0:00:00.580) 0:04:57.840 ********
2026-03-19 02:32:31.394234 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:32:31.394240 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:32:31.394246 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:32:31.394252 | orchestrator |
2026-03-19 02:32:31.394258 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 02:32:31.394267 | orchestrator | Thursday 19 March 2026 02:32:30 +0000 (0:00:00.333) 0:04:58.174 ********
2026-03-19 02:32:31.394279 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.394289 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.394299 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.394309 | orchestrator |
2026-03-19 02:32:31.394319 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 02:32:31.394329 | orchestrator | Thursday 19 March 2026 02:32:31 +0000 (0:00:00.337) 0:04:58.511 ********
2026-03-19 02:32:31.394338 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:32:31.394348 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:32:31.394358 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:32:31.394367 | orchestrator |
2026-03-19 02:32:31.394377 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 02:32:31.394396 | orchestrator | Thursday 19 March 2026 02:32:31 +0000 (0:00:00.331) 0:04:58.843 ********
2026-03-19 02:33:40.667831 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:33:40.667981 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:33:40.668007 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:33:40.668028 | orchestrator |
2026-03-19 02:33:40.668049 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-19 02:33:40.668068 | orchestrator | Thursday 19 March 2026 02:32:32 +0000 (0:00:00.808) 0:04:59.651 ********
2026-03-19 02:33:40.668087 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 02:33:40.668107 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 02:33:40.668126 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 02:33:40.668146 | orchestrator |
2026-03-19 02:33:40.668164 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-19 02:33:40.668182 | orchestrator | Thursday 19 March 2026 02:32:32 +0000 (0:00:00.637) 0:05:00.288 ********
2026-03-19 02:33:40.668201 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:33:40.668221 | orchestrator |
2026-03-19 02:33:40.668241 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-19 02:33:40.668259 | orchestrator | Thursday 19 March 2026 02:32:33 +0000 (0:00:00.777) 0:05:01.066 ********
2026-03-19 02:33:40.668276 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:33:40.668295 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:33:40.668314 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:33:40.668334 | orchestrator |
2026-03-19 02:33:40.668354 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-19 02:33:40.668411 | orchestrator | Thursday 19 March 2026 02:32:34 +0000 (0:00:00.706) 0:05:01.772 ********
2026-03-19 02:33:40.668431 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.668450 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.668469 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:33:40.668487 | orchestrator |
2026-03-19 02:33:40.668505 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-19 02:33:40.668524 | orchestrator | Thursday 19 March 2026 02:32:34 +0000 (0:00:00.357) 0:05:02.130 ********
2026-03-19 02:33:40.668578 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.668598 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.668616 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.668634 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-19 02:33:40.668653 | orchestrator |
2026-03-19 02:33:40.668694 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-19 02:33:40.668715 | orchestrator | Thursday 19 March 2026 02:32:45 +0000 (0:00:10.859) 0:05:12.989 ********
2026-03-19 02:33:40.668732 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:33:40.668751 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:33:40.668770 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:33:40.668789 | orchestrator |
2026-03-19 02:33:40.668808 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-19 02:33:40.668841 | orchestrator | Thursday 19 March 2026 02:32:45 +0000 (0:00:00.371) 0:05:13.360 ********
2026-03-19 02:33:40.668860 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.668879 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-19 02:33:40.668898 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-19 02:33:40.668916 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.668934 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 02:33:40.668953 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 02:33:40.668971 | orchestrator |
2026-03-19 02:33:40.668990 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-19 02:33:40.669009 | orchestrator | Thursday 19 March 2026 02:32:48 +0000 (0:00:02.548) 0:05:15.908 ********
2026-03-19 02:33:40.669027 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.669045 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-19 02:33:40.669063 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-19 02:33:40.669082 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 02:33:40.669099 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-19 02:33:40.669117 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-19 02:33:40.669133 | orchestrator |
2026-03-19 02:33:40.669152 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-19 02:33:40.669171 | orchestrator | Thursday 19 March 2026 02:32:49 +0000 (0:00:01.212) 0:05:17.121 ********
2026-03-19 02:33:40.669189 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:33:40.669207 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:33:40.669224 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:33:40.669240 | orchestrator |
2026-03-19 02:33:40.669257 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-19 02:33:40.669276 | orchestrator | Thursday 19 March 2026 02:32:50 +0000 (0:00:00.728) 0:05:17.850 ********
2026-03-19 02:33:40.669295 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.669312 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.669331 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:33:40.669349 | orchestrator |
2026-03-19 02:33:40.669367 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-19 02:33:40.669386 | orchestrator | Thursday 19 March 2026 02:32:50 +0000 (0:00:00.332) 0:05:18.182 ********
2026-03-19 02:33:40.669421 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.669440 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.669459 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:33:40.669479 | orchestrator |
2026-03-19 02:33:40.669497 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-19 02:33:40.669516 | orchestrator | Thursday 19 March 2026 02:32:51 +0000 (0:00:00.595) 0:05:18.777 ********
2026-03-19 02:33:40.669560 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:33:40.669581 | orchestrator |
2026-03-19 02:33:40.669627 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-19 02:33:40.669646 | orchestrator | Thursday 19 March 2026 02:32:51 +0000 (0:00:00.564) 0:05:19.341 ********
2026-03-19 02:33:40.669665 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.669683 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.669703 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:33:40.669722 | orchestrator |
2026-03-19 02:33:40.669740 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-19 02:33:40.669754 | orchestrator | Thursday 19 March 2026 02:32:52 +0000 (0:00:00.368) 0:05:19.710 ********
2026-03-19 02:33:40.669765 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.669775 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.669786 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:33:40.669797 | orchestrator |
2026-03-19 02:33:40.669807 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-19 02:33:40.669818 | orchestrator | Thursday 19 March 2026 02:32:52 +0000 (0:00:00.618) 0:05:20.328 ********
2026-03-19 02:33:40.669829 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:33:40.669839 | orchestrator |
2026-03-19 02:33:40.669850 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-19 02:33:40.669861 | orchestrator | Thursday 19 March 2026 02:32:53 +0000 (0:00:00.576) 0:05:20.905 ********
2026-03-19 02:33:40.669871 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:33:40.669882 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:33:40.669892 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:33:40.669903 | orchestrator |
2026-03-19 02:33:40.669913 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-19 02:33:40.669924 | orchestrator | Thursday 19 March 2026 02:32:54 +0000 (0:00:01.293) 0:05:22.198 ********
2026-03-19 02:33:40.669934 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:33:40.669945 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:33:40.669956 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:33:40.669967 | orchestrator |
2026-03-19 02:33:40.669977 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-19 02:33:40.669988 | orchestrator | Thursday 19 March 2026 02:32:56 +0000 (0:00:01.575) 0:05:23.773 ********
2026-03-19 02:33:40.669998 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:33:40.670009 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:33:40.670084 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:33:40.670095 | orchestrator |
2026-03-19 02:33:40.670106 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-19 02:33:40.670126 | orchestrator | Thursday 19 March 2026 02:32:58 +0000 (0:00:02.035) 0:05:25.809 ********
2026-03-19 02:33:40.670137 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:33:40.670148 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:33:40.670159 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:33:40.670169 | orchestrator |
2026-03-19 02:33:40.670180 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-19 02:33:40.670193 | orchestrator | Thursday 19 March 2026 02:33:01 +0000 (0:00:02.888) 0:05:28.698 ********
2026-03-19 02:33:40.670213 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:33:40.670232 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:33:40.670252 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-19 02:33:40.670284 | orchestrator |
2026-03-19 02:33:40.670302 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-19 02:33:40.670320 | orchestrator | Thursday 19 March 2026 02:33:01 +0000 (0:00:00.657) 0:05:29.355 ********
2026-03-19 02:33:40.670338 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-19 02:33:40.670357 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-19 02:33:40.670376 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-19 02:33:40.670397 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-19 02:33:40.670416 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-19 02:33:40.670435 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 02:33:40.670454 | orchestrator |
2026-03-19 02:33:40.670473 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-19 02:33:40.670493 | orchestrator | Thursday 19 March 2026 02:33:32 +0000 (0:00:30.430) 0:05:59.785 ********
2026-03-19 02:33:40.670513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 02:33:40.670572 | orchestrator |
2026-03-19 02:33:40.670593 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-19 02:33:40.670612 | orchestrator | Thursday 19 March 2026 02:33:33 +0000 (0:00:01.362) 0:06:01.148 ********
2026-03-19 02:33:40.670628 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:33:40.670639 | orchestrator |
2026-03-19 02:33:40.670650 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-19 02:33:40.670660 | orchestrator | Thursday 19 March 2026 02:33:33 +0000 (0:00:00.301) 0:06:01.449 ********
2026-03-19 02:33:40.670671 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:33:40.670682 | orchestrator |
2026-03-19 02:33:40.670693 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-19 02:33:40.670703 | orchestrator | Thursday 19 March 2026 02:33:34 +0000 (0:00:00.160) 0:06:01.610 ********
2026-03-19 02:33:40.670714 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-19 02:33:40.670725 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-19 02:33:40.670735 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-19 02:33:40.670746 | orchestrator |
2026-03-19 02:33:40.670768 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-19 02:34:02.602330 | orchestrator | Thursday 19 March 2026 02:33:40 +0000 (0:00:06.504) 0:06:08.115 ********
2026-03-19 02:34:02.602460 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-19 02:34:02.602477 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-19 02:34:02.602490 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-19 02:34:02.602502 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-19 02:34:02.602514 | orchestrator |
2026-03-19 02:34:02.602526 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-19 02:34:02.602587 | orchestrator | Thursday 19 March 2026 02:33:45 +0000 (0:00:05.272) 0:06:13.387 ********
2026-03-19 02:34:02.602599 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:34:02.602611 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:34:02.602623 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:34:02.602634 | orchestrator |
2026-03-19 02:34:02.602645 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-19 02:34:02.602657 | orchestrator | Thursday 19 March 2026 02:33:46 +0000 (0:00:00.717) 0:06:14.105 ********
2026-03-19 02:34:02.602668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:34:02.602707 | orchestrator |
2026-03-19 02:34:02.602719 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-19 02:34:02.602730 | orchestrator | Thursday 19 March 2026 02:33:47 +0000 (0:00:00.546) 0:06:14.651 ********
2026-03-19 02:34:02.602741 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:34:02.602752 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:34:02.602763 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:34:02.602774 | orchestrator |
2026-03-19 02:34:02.602785 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-19 02:34:02.602796 | orchestrator | Thursday 19 March 2026 02:33:47 +0000 (0:00:00.617) 0:06:15.268 ********
2026-03-19 02:34:02.602807 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:34:02.602818 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:34:02.602828 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:34:02.602839 | orchestrator |
2026-03-19 02:34:02.602850 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-19 02:34:02.602865 | orchestrator | Thursday 19 March 2026 02:33:49 +0000 (0:00:01.254) 0:06:16.523 ********
2026-03-19 02:34:02.602895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 02:34:02.602908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 02:34:02.602920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 02:34:02.602933 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:34:02.602945 | orchestrator |
2026-03-19 02:34:02.602958 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-19 02:34:02.602971 | orchestrator | Thursday 19 March 2026 02:33:49 +0000 (0:00:00.631) 0:06:17.155 ********
2026-03-19 02:34:02.602984 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:34:02.602997 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:34:02.603009 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:34:02.603022 | orchestrator |
2026-03-19 02:34:02.603034 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-19 02:34:02.603048 | orchestrator |
2026-03-19 02:34:02.603061 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 02:34:02.603074 | orchestrator | Thursday 19 March 2026 02:33:50 +0000 (0:00:00.578) 0:06:17.733 ********
2026-03-19 02:34:02.603088 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:34:02.603102 | orchestrator |
2026-03-19 02:34:02.603115 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 02:34:02.603128 | orchestrator | Thursday 19 March 2026 02:33:51 +0000 (0:00:00.797) 0:06:18.531 ********
2026-03-19 02:34:02.603140 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:34:02.603153 | orchestrator |
2026-03-19 02:34:02.603166 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 02:34:02.603178 | orchestrator | Thursday 19 March 2026 02:33:51 +0000 (0:00:00.750) 0:06:19.282 ********
2026-03-19 02:34:02.603190 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:34:02.603203 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:34:02.603216 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:34:02.603228 | orchestrator |
2026-03-19 02:34:02.603240 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 02:34:02.603251 | orchestrator | Thursday 19 March 2026 02:33:52 +0000 (0:00:00.349) 0:06:19.631 ********
2026-03-19 02:34:02.603262 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:34:02.603273 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:34:02.603284 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:34:02.603294 | orchestrator |
2026-03-19 02:34:02.603305 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 02:34:02.603316 | orchestrator | Thursday 19 March 2026 02:33:52 +0000 (0:00:00.700) 0:06:20.332 ********
2026-03-19 02:34:02.603335 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:34:02.603346 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:34:02.603357 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:34:02.603368 | orchestrator |
2026-03-19 02:34:02.603378 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 02:34:02.603389 | orchestrator | Thursday 19 March 2026 02:33:53 +0000 (0:00:00.746) 0:06:21.078 ********
2026-03-19 02:34:02.603400 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:34:02.603411 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:34:02.603421 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:34:02.603432 | orchestrator |
2026-03-19 02:34:02.603443 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 02:34:02.603453 | orchestrator | Thursday 19 March 2026 02:33:54 +0000 (0:00:01.030) 0:06:22.109 ********
2026-03-19 02:34:02.603464 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:34:02.603475 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:34:02.603506 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:34:02.603518 | orchestrator |
2026-03-19 02:34:02.603582 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 02:34:02.603610 | orchestrator | Thursday 19 March 2026 02:33:54 +0000 (0:00:00.341) 0:06:22.450 ********
2026-03-19 02:34:02.603632 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:34:02.603650 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:34:02.603670 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:34:02.603688 | orchestrator |
2026-03-19 02:34:02.603707 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 02:34:02.603722 | orchestrator | Thursday 19 March 2026 02:33:55 +0000 (0:00:00.349) 0:06:22.800 ********
2026-03-19 02:34:02.603733 |
orchestrator | skipping: [testbed-node-3] 2026-03-19 02:34:02.603801 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.603815 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.603826 | orchestrator | 2026-03-19 02:34:02.603837 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 02:34:02.603848 | orchestrator | Thursday 19 March 2026 02:33:55 +0000 (0:00:00.321) 0:06:23.121 ******** 2026-03-19 02:34:02.603858 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.603869 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.603880 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.603891 | orchestrator | 2026-03-19 02:34:02.603902 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 02:34:02.603913 | orchestrator | Thursday 19 March 2026 02:33:57 +0000 (0:00:01.366) 0:06:24.488 ******** 2026-03-19 02:34:02.603924 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.603935 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.603945 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.603956 | orchestrator | 2026-03-19 02:34:02.603967 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 02:34:02.603978 | orchestrator | Thursday 19 March 2026 02:33:57 +0000 (0:00:00.711) 0:06:25.199 ******** 2026-03-19 02:34:02.603989 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:34:02.604000 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.604010 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.604021 | orchestrator | 2026-03-19 02:34:02.604033 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 02:34:02.604043 | orchestrator | Thursday 19 March 2026 02:33:58 +0000 (0:00:00.351) 0:06:25.551 ******** 2026-03-19 02:34:02.604054 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 02:34:02.604065 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.604076 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.604087 | orchestrator | 2026-03-19 02:34:02.604106 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 02:34:02.604117 | orchestrator | Thursday 19 March 2026 02:33:58 +0000 (0:00:00.316) 0:06:25.867 ******** 2026-03-19 02:34:02.604128 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604139 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.604160 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604171 | orchestrator | 2026-03-19 02:34:02.604182 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 02:34:02.604193 | orchestrator | Thursday 19 March 2026 02:33:59 +0000 (0:00:00.634) 0:06:26.502 ******** 2026-03-19 02:34:02.604204 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604215 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.604226 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604237 | orchestrator | 2026-03-19 02:34:02.604248 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 02:34:02.604259 | orchestrator | Thursday 19 March 2026 02:33:59 +0000 (0:00:00.389) 0:06:26.891 ******** 2026-03-19 02:34:02.604270 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604281 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.604291 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604302 | orchestrator | 2026-03-19 02:34:02.604312 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 02:34:02.604324 | orchestrator | Thursday 19 March 2026 02:33:59 +0000 (0:00:00.367) 0:06:27.259 ******** 2026-03-19 02:34:02.604334 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:34:02.604345 | 
orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.604356 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.604367 | orchestrator | 2026-03-19 02:34:02.604378 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 02:34:02.604389 | orchestrator | Thursday 19 March 2026 02:34:00 +0000 (0:00:00.331) 0:06:27.591 ******** 2026-03-19 02:34:02.604400 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:34:02.604410 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.604421 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.604432 | orchestrator | 2026-03-19 02:34:02.604443 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 02:34:02.604454 | orchestrator | Thursday 19 March 2026 02:34:00 +0000 (0:00:00.607) 0:06:28.198 ******** 2026-03-19 02:34:02.604464 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:34:02.604475 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:34:02.604486 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:34:02.604497 | orchestrator | 2026-03-19 02:34:02.604508 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 02:34:02.604519 | orchestrator | Thursday 19 March 2026 02:34:01 +0000 (0:00:00.337) 0:06:28.535 ******** 2026-03-19 02:34:02.604565 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604587 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.604606 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604624 | orchestrator | 2026-03-19 02:34:02.604642 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 02:34:02.604656 | orchestrator | Thursday 19 March 2026 02:34:01 +0000 (0:00:00.361) 0:06:28.897 ******** 2026-03-19 02:34:02.604679 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604704 | orchestrator | ok: 
[testbed-node-4] 2026-03-19 02:34:02.604721 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604739 | orchestrator | 2026-03-19 02:34:02.604756 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-19 02:34:02.604776 | orchestrator | Thursday 19 March 2026 02:34:02 +0000 (0:00:00.793) 0:06:29.690 ******** 2026-03-19 02:34:02.604794 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:34:02.604812 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:34:02.604828 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:34:02.604839 | orchestrator | 2026-03-19 02:34:02.604862 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-19 02:35:01.615370 | orchestrator | Thursday 19 March 2026 02:34:02 +0000 (0:00:00.361) 0:06:30.052 ******** 2026-03-19 02:35:01.615469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:35:01.615479 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:35:01.615507 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:35:01.615514 | orchestrator | 2026-03-19 02:35:01.615521 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-19 02:35:01.615576 | orchestrator | Thursday 19 March 2026 02:34:03 +0000 (0:00:00.659) 0:06:30.711 ******** 2026-03-19 02:35:01.615582 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:01.615589 | orchestrator | 2026-03-19 02:35:01.615595 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-19 02:35:01.615600 | orchestrator | Thursday 19 March 2026 02:34:04 +0000 (0:00:00.821) 0:06:31.533 ******** 2026-03-19 02:35:01.615606 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 02:35:01.615614 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:01.615620 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:01.615626 | orchestrator | 2026-03-19 02:35:01.615632 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-19 02:35:01.615638 | orchestrator | Thursday 19 March 2026 02:34:04 +0000 (0:00:00.320) 0:06:31.854 ******** 2026-03-19 02:35:01.615643 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:01.615649 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:01.615655 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:01.615661 | orchestrator | 2026-03-19 02:35:01.615666 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-19 02:35:01.615672 | orchestrator | Thursday 19 March 2026 02:34:04 +0000 (0:00:00.309) 0:06:32.163 ******** 2026-03-19 02:35:01.615678 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:35:01.615684 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:35:01.615690 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:35:01.615696 | orchestrator | 2026-03-19 02:35:01.615702 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-19 02:35:01.615707 | orchestrator | Thursday 19 March 2026 02:34:05 +0000 (0:00:00.619) 0:06:32.783 ******** 2026-03-19 02:35:01.615713 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:35:01.615732 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:35:01.615738 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:35:01.615744 | orchestrator | 2026-03-19 02:35:01.615750 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-19 02:35:01.615755 | orchestrator | Thursday 19 March 2026 02:34:05 +0000 (0:00:00.635) 0:06:33.418 ******** 2026-03-19 02:35:01.615761 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 02:35:01.615768 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 02:35:01.615774 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 02:35:01.615781 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 02:35:01.615786 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 02:35:01.615792 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 02:35:01.615798 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 02:35:01.615803 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 02:35:01.615809 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 02:35:01.615815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 02:35:01.615821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 02:35:01.615827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 02:35:01.615832 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 02:35:01.615844 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 02:35:01.615849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 02:35:01.615855 | orchestrator | 2026-03-19 02:35:01.615861 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
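The "Apply operating system tuning" task above writes a handful of kernel parameters on each OSD node. A minimal sketch of what those settings look like as a sysctl.d-style file, with the names and values copied directly from the log output (the `render_sysctl_conf` helper and the list literal are illustrative, not ceph-ansible's actual implementation):

```python
# Sketch: render the sysctl settings applied by the
# "ceph-osd : Apply operating system tuning" task above.
# Names/values are copied from the log; the helper is hypothetical.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576"},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl_conf(params):
    """Render a sysctl.d-style file body from name/value pairs."""
    return "\n".join(f"{p['name']} = {p['value']}" for p in params) + "\n"

print(render_sysctl_conf(os_tuning_params))
```

Note that `vm.min_free_kbytes` is derived at runtime: the play first reads the node's default ("Get default vm.min_free_kbytes") and then sets the fact before applying it, which is why the value appears as a string in the log items.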
2026-03-19 02:35:01.615867 | orchestrator | Thursday 19 March 2026 02:34:09 +0000 (0:00:03.053) 0:06:36.471 ******** 2026-03-19 02:35:01.615873 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:01.615878 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:01.615884 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:01.615890 | orchestrator | 2026-03-19 02:35:01.615896 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-19 02:35:01.615902 | orchestrator | Thursday 19 March 2026 02:34:09 +0000 (0:00:00.312) 0:06:36.784 ******** 2026-03-19 02:35:01.615907 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:01.615913 | orchestrator | 2026-03-19 02:35:01.615920 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-19 02:35:01.615927 | orchestrator | Thursday 19 March 2026 02:34:10 +0000 (0:00:00.816) 0:06:37.601 ******** 2026-03-19 02:35:01.615934 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 02:35:01.615955 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 02:35:01.615962 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 02:35:01.615969 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-19 02:35:01.615976 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-19 02:35:01.615983 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-19 02:35:01.615990 | orchestrator | 2026-03-19 02:35:01.615997 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-19 02:35:01.616003 | orchestrator | Thursday 19 March 2026 02:34:11 +0000 (0:00:01.058) 0:06:38.659 ******** 2026-03-19 02:35:01.616010 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:35:01.616017 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:35:01.616023 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:35:01.616030 | orchestrator | 2026-03-19 02:35:01.616037 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-19 02:35:01.616043 | orchestrator | Thursday 19 March 2026 02:34:13 +0000 (0:00:02.377) 0:06:41.037 ******** 2026-03-19 02:35:01.616050 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 02:35:01.616057 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:35:01.616064 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:35:01.616071 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 02:35:01.616078 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 02:35:01.616085 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:35:01.616091 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 02:35:01.616098 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 02:35:01.616105 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:35:01.616112 | orchestrator | 2026-03-19 02:35:01.616118 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-19 02:35:01.616125 | orchestrator | Thursday 19 March 2026 02:34:14 +0000 (0:00:01.236) 0:06:42.273 ******** 2026-03-19 02:35:01.616132 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 02:35:01.616139 | orchestrator | 2026-03-19 02:35:01.616145 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-19 02:35:01.616156 | orchestrator | Thursday 19 March 2026 02:34:17 +0000 (0:00:02.324) 0:06:44.597 ******** 2026-03-19 02:35:01.616163 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:01.616174 | orchestrator | 2026-03-19 02:35:01.616181 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-19 02:35:01.616188 | orchestrator | Thursday 19 March 2026 02:34:17 +0000 (0:00:00.810) 0:06:45.408 ******** 2026-03-19 02:35:01.616196 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}) 2026-03-19 02:35:01.616204 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}) 2026-03-19 02:35:01.616211 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'}) 2026-03-19 02:35:01.616217 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}) 2026-03-19 02:35:01.616224 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}) 2026-03-19 02:35:01.616231 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}) 2026-03-19 02:35:01.616238 | orchestrator | 2026-03-19 02:35:01.616244 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-19 02:35:01.616252 | orchestrator | Thursday 19 March 2026 02:34:57 +0000 (0:00:39.497) 0:07:24.905 ******** 2026-03-19 02:35:01.616259 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:01.616266 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
02:35:01.616273 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:01.616279 | orchestrator | 2026-03-19 02:35:01.616285 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-19 02:35:01.616290 | orchestrator | Thursday 19 March 2026 02:34:57 +0000 (0:00:00.280) 0:07:25.186 ******** 2026-03-19 02:35:01.616296 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:01.616302 | orchestrator | 2026-03-19 02:35:01.616308 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-19 02:35:01.616314 | orchestrator | Thursday 19 March 2026 02:34:58 +0000 (0:00:00.666) 0:07:25.852 ******** 2026-03-19 02:35:01.616319 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:35:01.616325 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:35:01.616331 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:35:01.616337 | orchestrator | 2026-03-19 02:35:01.616342 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-19 02:35:01.616348 | orchestrator | Thursday 19 March 2026 02:34:59 +0000 (0:00:00.651) 0:07:26.504 ******** 2026-03-19 02:35:01.616354 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:35:01.616360 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:35:01.616365 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:35:01.616371 | orchestrator | 2026-03-19 02:35:01.616377 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-19 02:35:01.616386 | orchestrator | Thursday 19 March 2026 02:35:01 +0000 (0:00:02.556) 0:07:29.060 ******** 2026-03-19 02:35:36.798752 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:36.798844 | orchestrator | 2026-03-19 02:35:36.798853 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-19 02:35:36.798861 | orchestrator | Thursday 19 March 2026 02:35:02 +0000 (0:00:00.639) 0:07:29.700 ******** 2026-03-19 02:35:36.798866 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:35:36.798873 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:35:36.798879 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:35:36.798884 | orchestrator | 2026-03-19 02:35:36.798889 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-19 02:35:36.798915 | orchestrator | Thursday 19 March 2026 02:35:03 +0000 (0:00:01.210) 0:07:30.911 ******** 2026-03-19 02:35:36.798920 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:35:36.798925 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:35:36.798930 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:35:36.798935 | orchestrator | 2026-03-19 02:35:36.798941 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-19 02:35:36.798946 | orchestrator | Thursday 19 March 2026 02:35:04 +0000 (0:00:01.108) 0:07:32.019 ******** 2026-03-19 02:35:36.798952 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:35:36.798957 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:35:36.798962 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:35:36.798968 | orchestrator | 2026-03-19 02:35:36.798973 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-19 02:35:36.798978 | orchestrator | Thursday 19 March 2026 02:35:06 +0000 (0:00:02.006) 0:07:34.026 ******** 2026-03-19 02:35:36.798983 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.798988 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.798993 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:36.798998 | orchestrator | 2026-03-19 02:35:36.799003 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-19 02:35:36.799009 | orchestrator | Thursday 19 March 2026 02:35:06 +0000 (0:00:00.293) 0:07:34.319 ******** 2026-03-19 02:35:36.799014 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.799019 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.799024 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:36.799029 | orchestrator | 2026-03-19 02:35:36.799034 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-19 02:35:36.799051 | orchestrator | Thursday 19 March 2026 02:35:07 +0000 (0:00:00.301) 0:07:34.621 ******** 2026-03-19 02:35:36.799057 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 02:35:36.799062 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-19 02:35:36.799067 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-19 02:35:36.799072 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-19 02:35:36.799077 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-19 02:35:36.799082 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-19 02:35:36.799087 | orchestrator | 2026-03-19 02:35:36.799092 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-19 02:35:36.799097 | orchestrator | Thursday 19 March 2026 02:35:08 +0000 (0:00:01.015) 0:07:35.637 ******** 2026-03-19 02:35:36.799103 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-19 02:35:36.799109 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-19 02:35:36.799114 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-19 02:35:36.799119 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-19 02:35:36.799125 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-19 02:35:36.799130 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-19 02:35:36.799135 | orchestrator | 2026-03-19 02:35:36.799140 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-19 02:35:36.799145 | orchestrator | Thursday 19 March 2026 02:35:10 +0000 (0:00:02.358) 0:07:37.995 ******** 2026-03-19 02:35:36.799150 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-19 02:35:36.799167 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-19 02:35:36.799173 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-19 02:35:36.799178 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-19 02:35:36.799183 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-19 02:35:36.799188 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-19 02:35:36.799193 | orchestrator | 2026-03-19 02:35:36.799198 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-19 02:35:36.799203 | orchestrator | Thursday 19 March 2026 02:35:14 +0000 (0:00:03.731) 0:07:41.727 ******** 2026-03-19 02:35:36.799221 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.799226 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.799231 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 02:35:36.799237 | orchestrator | 2026-03-19 02:35:36.799242 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-19 02:35:36.799247 | orchestrator | Thursday 19 March 2026 02:35:16 +0000 (0:00:02.522) 0:07:44.249 ******** 2026-03-19 02:35:36.799252 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.799257 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.799262 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
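The "Wait for all osd to be up" task above is an Ansible retries/until loop: it polls the cluster (delegated to testbed-node-0) until every OSD reports up, and the log shows exactly one retry before success with 60 retries budgeted. A minimal sketch of that polling pattern, assuming a caller-supplied `get_osd_stat` callback in place of the real `ceph osd stat` query:

```python
# Sketch of the retry pattern behind "Wait for all osd to be up":
# poll until the number of "up" OSDs equals the total, up to `retries`
# attempts. get_osd_stat() stands in for the real `ceph osd stat` call.
import time

def wait_for_osds_up(get_osd_stat, retries=60, delay=1.0):
    """Return the number of polls needed, or raise after `retries` attempts."""
    for attempt in range(retries):
        stat = get_osd_stat()
        if stat["num_up_osds"] == stat["num_osds"]:
            return attempt + 1
        time.sleep(delay)
    raise TimeoutError("OSDs did not all come up")

# Simulated cluster matching the log: first poll sees one OSD still
# starting, the second poll sees all six up.
responses = iter([{"num_osds": 6, "num_up_osds": 5},
                  {"num_osds": 6, "num_up_osds": 6}])
print(wait_for_osds_up(lambda: next(responses), delay=0))  # → 2
```

This is why the log prints "FAILED - RETRYING ... (60 retries left)" followed by an `ok`: a failed poll inside a retries loop is expected, not a task failure.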
2026-03-19 02:35:36.799268 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 02:35:36.799273 | orchestrator | 2026-03-19 02:35:36.799279 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-19 02:35:36.799284 | orchestrator | Thursday 19 March 2026 02:35:29 +0000 (0:00:12.712) 0:07:56.962 ******** 2026-03-19 02:35:36.799289 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.799294 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.799299 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:36.799305 | orchestrator | 2026-03-19 02:35:36.799310 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 02:35:36.799315 | orchestrator | Thursday 19 March 2026 02:35:30 +0000 (0:00:01.043) 0:07:58.005 ******** 2026-03-19 02:35:36.799321 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:35:36.799326 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:35:36.799344 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:35:36.799349 | orchestrator | 2026-03-19 02:35:36.799355 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-19 02:35:36.799360 | orchestrator | Thursday 19 March 2026 02:35:30 +0000 (0:00:00.331) 0:07:58.337 ******** 2026-03-19 02:35:36.799365 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:35:36.799370 | orchestrator | 2026-03-19 02:35:36.799376 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-19 02:35:36.799381 | orchestrator | Thursday 19 March 2026 02:35:31 +0000 (0:00:00.660) 0:07:58.998 ******** 2026-03-19 02:35:36.799386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:35:36.799391 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-03-19 02:35:36.799397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:35:36.799402 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799407 | orchestrator |
2026-03-19 02:35:36.799412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-19 02:35:36.799417 | orchestrator | Thursday 19 March 2026 02:35:31 +0000 (0:00:00.375) 0:07:59.373 ********
2026-03-19 02:35:36.799422 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799428 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:35:36.799433 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:35:36.799438 | orchestrator |
2026-03-19 02:35:36.799443 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-19 02:35:36.799449 | orchestrator | Thursday 19 March 2026 02:35:32 +0000 (0:00:00.287) 0:07:59.661 ********
2026-03-19 02:35:36.799454 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799459 | orchestrator |
2026-03-19 02:35:36.799464 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-19 02:35:36.799469 | orchestrator | Thursday 19 March 2026 02:35:32 +0000 (0:00:00.200) 0:07:59.862 ********
2026-03-19 02:35:36.799474 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799479 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:35:36.799484 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:35:36.799490 | orchestrator |
2026-03-19 02:35:36.799495 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-19 02:35:36.799506 | orchestrator | Thursday 19 March 2026 02:35:32 +0000 (0:00:00.480) 0:08:00.342 ********
2026-03-19 02:35:36.799512 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799532 | orchestrator |
2026-03-19 02:35:36.799541 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-19 02:35:36.799549 | orchestrator | Thursday 19 March 2026 02:35:33 +0000 (0:00:00.218) 0:08:00.561 ********
2026-03-19 02:35:36.799556 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799564 | orchestrator |
2026-03-19 02:35:36.799569 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-19 02:35:36.799574 | orchestrator | Thursday 19 March 2026 02:35:33 +0000 (0:00:00.200) 0:08:00.761 ********
2026-03-19 02:35:36.799579 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799584 | orchestrator |
2026-03-19 02:35:36.799589 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-19 02:35:36.799595 | orchestrator | Thursday 19 March 2026 02:35:33 +0000 (0:00:00.122) 0:08:00.884 ********
2026-03-19 02:35:36.799600 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799605 | orchestrator |
2026-03-19 02:35:36.799610 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-19 02:35:36.799615 | orchestrator | Thursday 19 March 2026 02:35:33 +0000 (0:00:00.208) 0:08:01.093 ********
2026-03-19 02:35:36.799620 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799626 | orchestrator |
2026-03-19 02:35:36.799631 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-19 02:35:36.799636 | orchestrator | Thursday 19 March 2026 02:35:33 +0000 (0:00:00.210) 0:08:01.303 ********
2026-03-19 02:35:36.799641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 02:35:36.799647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 02:35:36.799652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 02:35:36.799657 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799662 | orchestrator |
2026-03-19 02:35:36.799667 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-19 02:35:36.799672 | orchestrator | Thursday 19 March 2026 02:35:34 +0000 (0:00:00.365) 0:08:01.669 ********
2026-03-19 02:35:36.799677 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799682 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:35:36.799688 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:35:36.799693 | orchestrator |
2026-03-19 02:35:36.799698 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-19 02:35:36.799703 | orchestrator | Thursday 19 March 2026 02:35:34 +0000 (0:00:00.282) 0:08:01.951 ********
2026-03-19 02:35:36.799708 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799713 | orchestrator |
2026-03-19 02:35:36.799718 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-19 02:35:36.799723 | orchestrator | Thursday 19 March 2026 02:35:34 +0000 (0:00:00.207) 0:08:02.159 ********
2026-03-19 02:35:36.799729 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:35:36.799734 | orchestrator |
2026-03-19 02:35:36.799739 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-19 02:35:36.799744 | orchestrator |
2026-03-19 02:35:36.799749 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 02:35:36.799754 | orchestrator | Thursday 19 March 2026 02:35:35 +0000 (0:00:01.025) 0:08:03.184 ********
2026-03-19 02:35:36.799760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:35:36.799767 | orchestrator |
2026-03-19 02:35:36.799772 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 02:35:36.799781 | orchestrator | Thursday 19 March 2026 02:35:36 +0000 (0:00:01.064) 0:08:04.249 ********
2026-03-19 02:36:00.484647 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:36:00.484765 | orchestrator |
2026-03-19 02:36:00.484777 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 02:36:00.484785 | orchestrator | Thursday 19 March 2026 02:35:37 +0000 (0:00:01.068) 0:08:05.317 ********
2026-03-19 02:36:00.484791 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.484799 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.484806 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.484812 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.484819 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.484826 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.484832 | orchestrator |
2026-03-19 02:36:00.484839 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 02:36:00.484845 | orchestrator | Thursday 19 March 2026 02:35:38 +0000 (0:00:01.102) 0:08:06.419 ********
2026-03-19 02:36:00.484852 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.484858 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.484864 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.484871 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.484877 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.484883 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.484890 | orchestrator |
2026-03-19 02:36:00.484896 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 02:36:00.484902 | orchestrator | Thursday 19 March 2026 02:35:39 +0000 (0:00:00.695) 0:08:07.115 ********
2026-03-19 02:36:00.484908 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.484915 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.484921 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.484927 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.484934 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.484940 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.484946 | orchestrator |
2026-03-19 02:36:00.484952 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 02:36:00.484959 | orchestrator | Thursday 19 March 2026 02:35:40 +0000 (0:00:00.717) 0:08:07.833 ********
2026-03-19 02:36:00.484965 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.484971 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.484978 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.484997 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485004 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485010 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485016 | orchestrator |
2026-03-19 02:36:00.485022 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 02:36:00.485029 | orchestrator | Thursday 19 March 2026 02:35:41 +0000 (0:00:01.088) 0:08:08.508 ********
2026-03-19 02:36:00.485035 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485041 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485048 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485054 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485060 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.485067 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.485073 | orchestrator |
2026-03-19 02:36:00.485079 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 02:36:00.485086 | orchestrator | Thursday 19 March 2026 02:35:42 +0000 (0:00:01.088) 0:08:09.597 ********
2026-03-19 02:36:00.485092 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485098 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485104 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485111 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485117 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485123 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485130 | orchestrator |
2026-03-19 02:36:00.485136 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 02:36:00.485148 | orchestrator | Thursday 19 March 2026 02:35:42 +0000 (0:00:00.548) 0:08:10.145 ********
2026-03-19 02:36:00.485154 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485161 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485167 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485173 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485179 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485186 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485197 | orchestrator |
2026-03-19 02:36:00.485206 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 02:36:00.485222 | orchestrator | Thursday 19 March 2026 02:35:43 +0000 (0:00:00.672) 0:08:10.818 ********
2026-03-19 02:36:00.485233 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485243 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485253 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485264 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485274 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.485283 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.485293 | orchestrator |
2026-03-19 02:36:00.485303 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 02:36:00.485313 | orchestrator | Thursday 19 March 2026 02:35:44 +0000 (0:00:01.090) 0:08:11.908 ********
2026-03-19 02:36:00.485323 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485332 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485338 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485344 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485350 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.485357 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.485363 | orchestrator |
2026-03-19 02:36:00.485369 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 02:36:00.485375 | orchestrator | Thursday 19 March 2026 02:35:45 +0000 (0:00:01.152) 0:08:13.060 ********
2026-03-19 02:36:00.485382 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485389 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485395 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485401 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485407 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485414 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485420 | orchestrator |
2026-03-19 02:36:00.485426 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 02:36:00.485432 | orchestrator | Thursday 19 March 2026 02:35:46 +0000 (0:00:00.536) 0:08:13.597 ********
2026-03-19 02:36:00.485452 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485459 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485465 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485471 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485477 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.485484 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.485490 | orchestrator |
2026-03-19 02:36:00.485496 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 02:36:00.485502 | orchestrator | Thursday 19 March 2026 02:35:46 +0000 (0:00:00.736) 0:08:14.334 ********
2026-03-19 02:36:00.485509 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485588 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485596 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485602 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485609 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485615 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485621 | orchestrator |
2026-03-19 02:36:00.485627 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 02:36:00.485634 | orchestrator | Thursday 19 March 2026 02:35:47 +0000 (0:00:00.543) 0:08:14.877 ********
2026-03-19 02:36:00.485640 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485646 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485652 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485665 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485671 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485677 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485684 | orchestrator |
2026-03-19 02:36:00.485690 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 02:36:00.485696 | orchestrator | Thursday 19 March 2026 02:35:48 +0000 (0:00:00.703) 0:08:15.581 ********
2026-03-19 02:36:00.485702 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485709 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485715 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485721 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485727 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485734 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485740 | orchestrator |
2026-03-19 02:36:00.485746 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 02:36:00.485752 | orchestrator | Thursday 19 March 2026 02:35:48 +0000 (0:00:00.563) 0:08:16.144 ********
2026-03-19 02:36:00.485758 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485765 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485772 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485778 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485784 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485791 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485797 | orchestrator |
2026-03-19 02:36:00.485803 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 02:36:00.485810 | orchestrator | Thursday 19 March 2026 02:35:49 +0000 (0:00:00.707) 0:08:16.852 ********
2026-03-19 02:36:00.485816 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485822 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485829 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485835 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:36:00.485841 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:36:00.485847 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:36:00.485853 | orchestrator |
2026-03-19 02:36:00.485860 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 02:36:00.485866 | orchestrator | Thursday 19 March 2026 02:35:49 +0000 (0:00:00.565) 0:08:17.417 ********
2026-03-19 02:36:00.485872 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:00.485878 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:00.485885 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:00.485891 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485897 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.485903 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.485909 | orchestrator |
2026-03-19 02:36:00.485916 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 02:36:00.485962 | orchestrator | Thursday 19 March 2026 02:35:50 +0000 (0:00:00.721) 0:08:18.139 ********
2026-03-19 02:36:00.485969 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.485975 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.485981 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.485988 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.485994 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.486000 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.486006 | orchestrator |
2026-03-19 02:36:00.486078 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 02:36:00.486087 | orchestrator | Thursday 19 March 2026 02:35:51 +0000 (0:00:00.604) 0:08:18.744 ********
2026-03-19 02:36:00.486094 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:00.486100 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:00.486106 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:00.486113 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.486119 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:00.486125 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:00.486132 | orchestrator |
2026-03-19 02:36:00.486138 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-19 02:36:00.486152 | orchestrator | Thursday 19 March 2026 02:35:52 +0000 (0:00:01.121) 0:08:19.865 ********
2026-03-19 02:36:00.486158 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 02:36:00.486164 | orchestrator |
2026-03-19 02:36:00.486171 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-19 02:36:00.486177 | orchestrator | Thursday 19 March 2026 02:35:56 +0000 (0:00:04.227) 0:08:24.092 ********
2026-03-19 02:36:00.486183 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 02:36:00.486190 | orchestrator |
2026-03-19 02:36:00.486196 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-19 02:36:00.486202 | orchestrator | Thursday 19 March 2026 02:35:59 +0000 (0:00:02.449) 0:08:26.542 ********
2026-03-19 02:36:00.486209 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:36:00.486215 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:36:00.486221 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:36:00.486227 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:00.486234 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:36:00.486240 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:36:00.486246 | orchestrator |
2026-03-19 02:36:00.486260 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-19 02:36:23.488710 | orchestrator | Thursday 19 March 2026 02:36:00 +0000 (0:00:01.389) 0:08:27.932 ********
2026-03-19 02:36:23.488814 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:36:23.488828 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:36:23.488837 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:36:23.488849 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:36:23.488862 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:36:23.488871 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:36:23.488879 | orchestrator |
2026-03-19 02:36:23.488888 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-19 02:36:23.488897 | orchestrator | Thursday 19 March 2026 02:36:01 +0000 (0:00:01.170) 0:08:29.103 ********
2026-03-19 02:36:23.488906 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:36:23.488916 | orchestrator |
2026-03-19 02:36:23.488924 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-19 02:36:23.488932 | orchestrator | Thursday 19 March 2026 02:36:02 +0000 (0:00:01.127) 0:08:30.230 ********
2026-03-19 02:36:23.488940 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:36:23.488948 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:36:23.488956 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:36:23.488963 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:36:23.488987 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:36:23.488995 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:36:23.489003 | orchestrator |
2026-03-19 02:36:23.489011 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-19 02:36:23.489019 | orchestrator | Thursday 19 March 2026 02:36:04 +0000 (0:00:01.464) 0:08:31.695 ********
2026-03-19 02:36:23.489026 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:36:23.489034 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:36:23.489051 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:36:23.489059 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:36:23.489067 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:36:23.489074 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:36:23.489082 | orchestrator |
2026-03-19 02:36:23.489091 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-19 02:36:23.489099 | orchestrator | Thursday 19 March 2026 02:36:07 +0000 (0:00:03.603) 0:08:35.299 ********
2026-03-19 02:36:23.489125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:36:23.489133 | orchestrator |
2026-03-19 02:36:23.489174 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-19 02:36:23.489183 | orchestrator | Thursday 19 March 2026 02:36:09 +0000 (0:00:01.290) 0:08:36.589 ********
2026-03-19 02:36:23.489191 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.489200 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.489208 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.489215 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:23.489223 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:23.489231 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:23.489239 | orchestrator |
2026-03-19 02:36:23.489249 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-19 02:36:23.489262 | orchestrator | Thursday 19 March 2026 02:36:09 +0000 (0:00:00.664) 0:08:37.253 ********
2026-03-19 02:36:23.489276 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:36:23.489289 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:36:23.489302 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:36:23.489316 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:36:23.489329 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:36:23.489342 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:36:23.489357 | orchestrator |
2026-03-19 02:36:23.489370 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-19 02:36:23.489383 | orchestrator | Thursday 19 March 2026 02:36:12 +0000 (0:00:02.457) 0:08:39.711 ********
2026-03-19 02:36:23.489396 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.489408 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.489423 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.489436 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:36:23.489448 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:36:23.489461 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:36:23.489473 | orchestrator |
2026-03-19 02:36:23.489486 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-19 02:36:23.489501 | orchestrator |
2026-03-19 02:36:23.489539 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 02:36:23.489555 | orchestrator | Thursday 19 March 2026 02:36:13 +0000 (0:00:00.908) 0:08:40.620 ********
2026-03-19 02:36:23.489569 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:36:23.489581 | orchestrator |
2026-03-19 02:36:23.489593 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 02:36:23.489606 | orchestrator | Thursday 19 March 2026 02:36:13 +0000 (0:00:00.842) 0:08:41.463 ********
2026-03-19 02:36:23.489620 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:36:23.489634 | orchestrator |
2026-03-19 02:36:23.489647 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 02:36:23.489676 | orchestrator | Thursday 19 March 2026 02:36:14 +0000 (0:00:00.549) 0:08:42.012 ********
2026-03-19 02:36:23.489689 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.489702 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.489716 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.489728 | orchestrator |
2026-03-19 02:36:23.489742 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 02:36:23.489756 | orchestrator | Thursday 19 March 2026 02:36:15 +0000 (0:00:00.594) 0:08:42.606 ********
2026-03-19 02:36:23.489767 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.489775 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.489783 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.489791 | orchestrator |
2026-03-19 02:36:23.489830 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 02:36:23.489839 | orchestrator | Thursday 19 March 2026 02:36:15 +0000 (0:00:00.725) 0:08:43.332 ********
2026-03-19 02:36:23.489847 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.489855 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.489863 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.489883 | orchestrator |
2026-03-19 02:36:23.489892 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 02:36:23.489900 | orchestrator | Thursday 19 March 2026 02:36:16 +0000 (0:00:00.725) 0:08:44.057 ********
2026-03-19 02:36:23.489908 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.489915 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.489923 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.489931 | orchestrator |
2026-03-19 02:36:23.489939 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 02:36:23.489947 | orchestrator | Thursday 19 March 2026 02:36:17 +0000 (0:00:00.963) 0:08:45.021 ********
2026-03-19 02:36:23.489955 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.489963 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.489971 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.489979 | orchestrator |
2026-03-19 02:36:23.489987 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 02:36:23.489995 | orchestrator | Thursday 19 March 2026 02:36:17 +0000 (0:00:00.316) 0:08:45.337 ********
2026-03-19 02:36:23.490003 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490010 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490068 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490077 | orchestrator |
2026-03-19 02:36:23.490085 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 02:36:23.490093 | orchestrator | Thursday 19 March 2026 02:36:18 +0000 (0:00:00.344) 0:08:45.682 ********
2026-03-19 02:36:23.490101 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490109 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490116 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490124 | orchestrator |
2026-03-19 02:36:23.490132 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 02:36:23.490140 | orchestrator | Thursday 19 March 2026 02:36:18 +0000 (0:00:00.320) 0:08:46.002 ********
2026-03-19 02:36:23.490148 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.490156 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.490164 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.490172 | orchestrator |
2026-03-19 02:36:23.490187 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 02:36:23.490195 | orchestrator | Thursday 19 March 2026 02:36:19 +0000 (0:00:00.986) 0:08:46.989 ********
2026-03-19 02:36:23.490203 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.490211 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.490219 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.490227 | orchestrator |
2026-03-19 02:36:23.490235 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 02:36:23.490243 | orchestrator | Thursday 19 March 2026 02:36:20 +0000 (0:00:00.753) 0:08:47.742 ********
2026-03-19 02:36:23.490251 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490259 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490266 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490274 | orchestrator |
2026-03-19 02:36:23.490282 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 02:36:23.490290 | orchestrator | Thursday 19 March 2026 02:36:20 +0000 (0:00:00.384) 0:08:48.126 ********
2026-03-19 02:36:23.490298 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490306 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490314 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490322 | orchestrator |
2026-03-19 02:36:23.490335 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 02:36:23.490348 | orchestrator | Thursday 19 March 2026 02:36:20 +0000 (0:00:00.325) 0:08:48.452 ********
2026-03-19 02:36:23.490362 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.490374 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.490382 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.490390 | orchestrator |
2026-03-19 02:36:23.490398 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 02:36:23.490413 | orchestrator | Thursday 19 March 2026 02:36:21 +0000 (0:00:00.590) 0:08:49.042 ********
2026-03-19 02:36:23.490421 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.490429 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.490437 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.490445 | orchestrator |
2026-03-19 02:36:23.490453 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 02:36:23.490461 | orchestrator | Thursday 19 March 2026 02:36:21 +0000 (0:00:00.339) 0:08:49.382 ********
2026-03-19 02:36:23.490469 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:36:23.490477 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:36:23.490485 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:36:23.490493 | orchestrator |
2026-03-19 02:36:23.490500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 02:36:23.490508 | orchestrator | Thursday 19 March 2026 02:36:22 +0000 (0:00:00.344) 0:08:49.727 ********
2026-03-19 02:36:23.490549 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490558 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490565 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490573 | orchestrator |
2026-03-19 02:36:23.490581 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 02:36:23.490589 | orchestrator | Thursday 19 March 2026 02:36:22 +0000 (0:00:00.305) 0:08:50.032 ********
2026-03-19 02:36:23.490597 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490605 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490613 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490621 | orchestrator |
2026-03-19 02:36:23.490629 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 02:36:23.490637 | orchestrator | Thursday 19 March 2026 02:36:23 +0000 (0:00:00.574) 0:08:50.607 ********
2026-03-19 02:36:23.490645 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:36:23.490653 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:36:23.490661 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:36:23.490669 | orchestrator |
2026-03-19 02:36:23.490677 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 02:36:23.490694 | orchestrator | Thursday 19 March 2026 02:36:23 +0000 (0:00:00.333) 0:08:50.941 ********
2026-03-19 02:37:02.210567 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:37:02.210672 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:37:02.210682 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:37:02.210690 | orchestrator |
2026-03-19 02:37:02.210698 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 02:37:02.210706 | orchestrator | Thursday 19 March 2026 02:36:23 +0000 (0:00:00.346) 0:08:51.287 ********
2026-03-19 02:37:02.210713 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:37:02.210720 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:37:02.210727 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:37:02.210734 | orchestrator |
2026-03-19 02:37:02.210740 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-19 02:37:02.210747 | orchestrator | Thursday 19 March 2026 02:36:24 +0000 (0:00:00.832) 0:08:52.119 ********
2026-03-19 02:37:02.210754 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:37:02.210762 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:37:02.210769 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-19 02:37:02.210777 | orchestrator |
2026-03-19 02:37:02.210784 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-19 02:37:02.210790 | orchestrator | Thursday 19 March 2026 02:36:25 +0000 (0:00:00.533) 0:08:52.653 ********
2026-03-19 02:37:02.210797 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 02:37:02.210804 | orchestrator |
2026-03-19 02:37:02.210811 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-19 02:37:02.210817 | orchestrator | Thursday 19 March 2026 02:36:27 +0000 (0:00:02.303) 0:08:54.957 ********
2026-03-19 02:37:02.210850 | orchestrator | skipping: [testbed-node-3] =>
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-19 02:37:02.210860 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:02.210867 | orchestrator | 2026-03-19 02:37:02.210874 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-19 02:37:02.210880 | orchestrator | Thursday 19 March 2026 02:36:27 +0000 (0:00:00.253) 0:08:55.211 ******** 2026-03-19 02:37:02.210903 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:37:02.210917 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:37:02.210924 | orchestrator | 2026-03-19 02:37:02.210931 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-19 02:37:02.210937 | orchestrator | Thursday 19 March 2026 02:36:36 +0000 (0:00:08.252) 0:09:03.464 ******** 2026-03-19 02:37:02.210944 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 02:37:02.210951 | orchestrator | 2026-03-19 02:37:02.210958 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-19 02:37:02.210964 | orchestrator | Thursday 19 March 2026 02:36:39 +0000 (0:00:03.695) 0:09:07.159 ******** 2026-03-19 02:37:02.210971 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-19 02:37:02.210979 | orchestrator | 2026-03-19 02:37:02.210985 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-19 02:37:02.210992 | orchestrator | Thursday 19 March 2026 02:36:40 +0000 (0:00:00.827) 0:09:07.986 ******** 2026-03-19 02:37:02.210999 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-19 02:37:02.211006 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-19 02:37:02.211012 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-19 02:37:02.211019 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-19 02:37:02.211026 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-19 02:37:02.211032 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-19 02:37:02.211039 | orchestrator | 2026-03-19 02:37:02.211046 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-19 02:37:02.211052 | orchestrator | Thursday 19 March 2026 02:36:41 +0000 (0:00:01.046) 0:09:09.033 ******** 2026-03-19 02:37:02.211059 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:37:02.211066 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:37:02.211074 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:37:02.211082 | orchestrator | 2026-03-19 02:37:02.211090 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-19 02:37:02.211098 | orchestrator | Thursday 19 March 2026 02:36:43 +0000 (0:00:02.237) 0:09:11.271 ******** 2026-03-19 02:37:02.211106 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 02:37:02.211115 | orchestrator | changed: [testbed-node-3] 
=> (item=None) 2026-03-19 02:37:02.211122 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:37:02.211130 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 02:37:02.211138 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211146 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 02:37:02.211159 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211180 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 02:37:02.211189 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211196 | orchestrator | 2026-03-19 02:37:02.211204 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-19 02:37:02.211212 | orchestrator | Thursday 19 March 2026 02:36:44 +0000 (0:00:01.145) 0:09:12.416 ******** 2026-03-19 02:37:02.211219 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211227 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211234 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211242 | orchestrator | 2026-03-19 02:37:02.211249 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-19 02:37:02.211257 | orchestrator | Thursday 19 March 2026 02:36:47 +0000 (0:00:03.008) 0:09:15.424 ******** 2026-03-19 02:37:02.211265 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:02.211272 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:02.211280 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:02.211288 | orchestrator | 2026-03-19 02:37:02.211296 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-19 02:37:02.211303 | orchestrator | Thursday 19 March 2026 02:36:48 +0000 (0:00:00.403) 0:09:15.828 ******** 2026-03-19 02:37:02.211311 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-19 02:37:02.211319 | orchestrator | 2026-03-19 02:37:02.211327 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-19 02:37:02.211334 | orchestrator | Thursday 19 March 2026 02:36:49 +0000 (0:00:00.787) 0:09:16.616 ******** 2026-03-19 02:37:02.211342 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:02.211350 | orchestrator | 2026-03-19 02:37:02.211357 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-19 02:37:02.211365 | orchestrator | Thursday 19 March 2026 02:36:49 +0000 (0:00:00.570) 0:09:17.186 ******** 2026-03-19 02:37:02.211372 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211380 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211388 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211395 | orchestrator | 2026-03-19 02:37:02.211407 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-19 02:37:02.211415 | orchestrator | Thursday 19 March 2026 02:36:51 +0000 (0:00:01.321) 0:09:18.507 ******** 2026-03-19 02:37:02.211423 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211431 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211438 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211445 | orchestrator | 2026-03-19 02:37:02.211452 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-19 02:37:02.211458 | orchestrator | Thursday 19 March 2026 02:36:52 +0000 (0:00:01.538) 0:09:20.046 ******** 2026-03-19 02:37:02.211465 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211472 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211478 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211485 | orchestrator | 2026-03-19 
02:37:02.211491 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-19 02:37:02.211498 | orchestrator | Thursday 19 March 2026 02:36:54 +0000 (0:00:01.950) 0:09:21.996 ******** 2026-03-19 02:37:02.211524 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211531 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211537 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211544 | orchestrator | 2026-03-19 02:37:02.211551 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-19 02:37:02.211557 | orchestrator | Thursday 19 March 2026 02:36:56 +0000 (0:00:02.064) 0:09:24.061 ******** 2026-03-19 02:37:02.211564 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:02.211571 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:02.211582 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:02.211589 | orchestrator | 2026-03-19 02:37:02.211596 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 02:37:02.211602 | orchestrator | Thursday 19 March 2026 02:36:58 +0000 (0:00:01.540) 0:09:25.602 ******** 2026-03-19 02:37:02.211609 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211615 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211622 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211628 | orchestrator | 2026-03-19 02:37:02.211635 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-19 02:37:02.211642 | orchestrator | Thursday 19 March 2026 02:36:58 +0000 (0:00:00.724) 0:09:26.327 ******** 2026-03-19 02:37:02.211648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:02.211655 | orchestrator | 2026-03-19 02:37:02.211661 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-19 02:37:02.211668 | orchestrator | Thursday 19 March 2026 02:36:59 +0000 (0:00:00.854) 0:09:27.181 ******** 2026-03-19 02:37:02.211675 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:02.211681 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:02.211688 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:02.211694 | orchestrator | 2026-03-19 02:37:02.211701 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-19 02:37:02.211708 | orchestrator | Thursday 19 March 2026 02:37:00 +0000 (0:00:00.328) 0:09:27.510 ******** 2026-03-19 02:37:02.211714 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:02.211721 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:02.211728 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:02.211734 | orchestrator | 2026-03-19 02:37:02.211741 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-19 02:37:02.211748 | orchestrator | Thursday 19 March 2026 02:37:01 +0000 (0:00:01.271) 0:09:28.781 ******** 2026-03-19 02:37:02.211754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:37:02.211761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:37:02.211768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:37:02.211775 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:02.211781 | orchestrator | 2026-03-19 02:37:02.211793 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-19 02:37:21.124838 | orchestrator | Thursday 19 March 2026 02:37:02 +0000 (0:00:00.879) 0:09:29.661 ******** 2026-03-19 02:37:21.124960 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.124972 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.124980 | orchestrator | ok: [testbed-node-5] 2026-03-19 
02:37:21.124986 | orchestrator | 2026-03-19 02:37:21.124994 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-19 02:37:21.125000 | orchestrator | 2026-03-19 02:37:21.125007 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 02:37:21.125014 | orchestrator | Thursday 19 March 2026 02:37:03 +0000 (0:00:00.879) 0:09:30.541 ******** 2026-03-19 02:37:21.125021 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:21.125029 | orchestrator | 2026-03-19 02:37:21.125035 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 02:37:21.125041 | orchestrator | Thursday 19 March 2026 02:37:03 +0000 (0:00:00.590) 0:09:31.132 ******** 2026-03-19 02:37:21.125048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:21.125054 | orchestrator | 2026-03-19 02:37:21.125060 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 02:37:21.125067 | orchestrator | Thursday 19 March 2026 02:37:04 +0000 (0:00:00.797) 0:09:31.929 ******** 2026-03-19 02:37:21.125073 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125102 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125108 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.125114 | orchestrator | 2026-03-19 02:37:21.125120 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 02:37:21.125127 | orchestrator | Thursday 19 March 2026 02:37:04 +0000 (0:00:00.339) 0:09:32.269 ******** 2026-03-19 02:37:21.125133 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125139 | orchestrator | ok: [testbed-node-4] 2026-03-19 
02:37:21.125145 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125151 | orchestrator | 2026-03-19 02:37:21.125157 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 02:37:21.125179 | orchestrator | Thursday 19 March 2026 02:37:05 +0000 (0:00:00.790) 0:09:33.059 ******** 2026-03-19 02:37:21.125185 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125192 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125198 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125204 | orchestrator | 2026-03-19 02:37:21.125210 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 02:37:21.125216 | orchestrator | Thursday 19 March 2026 02:37:06 +0000 (0:00:01.033) 0:09:34.093 ******** 2026-03-19 02:37:21.125222 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125228 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125234 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125240 | orchestrator | 2026-03-19 02:37:21.125247 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 02:37:21.125253 | orchestrator | Thursday 19 March 2026 02:37:07 +0000 (0:00:00.742) 0:09:34.836 ******** 2026-03-19 02:37:21.125259 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125265 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125271 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.125277 | orchestrator | 2026-03-19 02:37:21.125284 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 02:37:21.125290 | orchestrator | Thursday 19 March 2026 02:37:07 +0000 (0:00:00.346) 0:09:35.182 ******** 2026-03-19 02:37:21.125296 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125303 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125309 | orchestrator | skipping: 
[testbed-node-5] 2026-03-19 02:37:21.125315 | orchestrator | 2026-03-19 02:37:21.125321 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 02:37:21.125327 | orchestrator | Thursday 19 March 2026 02:37:08 +0000 (0:00:00.314) 0:09:35.497 ******** 2026-03-19 02:37:21.125333 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125339 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125346 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.125352 | orchestrator | 2026-03-19 02:37:21.125358 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 02:37:21.125364 | orchestrator | Thursday 19 March 2026 02:37:08 +0000 (0:00:00.579) 0:09:36.076 ******** 2026-03-19 02:37:21.125371 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125378 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125385 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125392 | orchestrator | 2026-03-19 02:37:21.125399 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 02:37:21.125406 | orchestrator | Thursday 19 March 2026 02:37:09 +0000 (0:00:00.766) 0:09:36.843 ******** 2026-03-19 02:37:21.125414 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125425 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125442 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125452 | orchestrator | 2026-03-19 02:37:21.125463 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 02:37:21.125473 | orchestrator | Thursday 19 March 2026 02:37:10 +0000 (0:00:00.759) 0:09:37.602 ******** 2026-03-19 02:37:21.125483 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125492 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125526 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
02:37:21.125548 | orchestrator | 2026-03-19 02:37:21.125558 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 02:37:21.125568 | orchestrator | Thursday 19 March 2026 02:37:10 +0000 (0:00:00.303) 0:09:37.906 ******** 2026-03-19 02:37:21.125577 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125589 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125600 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.125610 | orchestrator | 2026-03-19 02:37:21.125622 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 02:37:21.125633 | orchestrator | Thursday 19 March 2026 02:37:11 +0000 (0:00:00.571) 0:09:38.477 ******** 2026-03-19 02:37:21.125643 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125654 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125665 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125676 | orchestrator | 2026-03-19 02:37:21.125706 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 02:37:21.125718 | orchestrator | Thursday 19 March 2026 02:37:11 +0000 (0:00:00.343) 0:09:38.820 ******** 2026-03-19 02:37:21.125729 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125740 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125751 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125761 | orchestrator | 2026-03-19 02:37:21.125773 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 02:37:21.125783 | orchestrator | Thursday 19 March 2026 02:37:11 +0000 (0:00:00.347) 0:09:39.167 ******** 2026-03-19 02:37:21.125793 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.125804 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.125814 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.125824 | orchestrator | 2026-03-19 
02:37:21.125834 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 02:37:21.125964 | orchestrator | Thursday 19 March 2026 02:37:12 +0000 (0:00:00.333) 0:09:39.501 ******** 2026-03-19 02:37:21.125977 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.125988 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.125998 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.126009 | orchestrator | 2026-03-19 02:37:21.126082 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 02:37:21.126092 | orchestrator | Thursday 19 March 2026 02:37:12 +0000 (0:00:00.551) 0:09:40.053 ******** 2026-03-19 02:37:21.126103 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.126113 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.126123 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.126133 | orchestrator | 2026-03-19 02:37:21.126143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 02:37:21.126153 | orchestrator | Thursday 19 March 2026 02:37:12 +0000 (0:00:00.301) 0:09:40.354 ******** 2026-03-19 02:37:21.126163 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.126173 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.126183 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.126193 | orchestrator | 2026-03-19 02:37:21.126203 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 02:37:21.126212 | orchestrator | Thursday 19 March 2026 02:37:13 +0000 (0:00:00.318) 0:09:40.673 ******** 2026-03-19 02:37:21.126233 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.126243 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.126252 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.126262 | orchestrator | 2026-03-19 02:37:21.126272 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 02:37:21.126282 | orchestrator | Thursday 19 March 2026 02:37:13 +0000 (0:00:00.325) 0:09:40.999 ******** 2026-03-19 02:37:21.126292 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:37:21.126301 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:37:21.126311 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:37:21.126321 | orchestrator | 2026-03-19 02:37:21.126331 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-19 02:37:21.126352 | orchestrator | Thursday 19 March 2026 02:37:14 +0000 (0:00:00.806) 0:09:41.805 ******** 2026-03-19 02:37:21.126363 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:21.126374 | orchestrator | 2026-03-19 02:37:21.126384 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 02:37:21.126394 | orchestrator | Thursday 19 March 2026 02:37:14 +0000 (0:00:00.574) 0:09:42.380 ******** 2026-03-19 02:37:21.126403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:37:21.126413 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:37:21.126423 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:37:21.126433 | orchestrator | 2026-03-19 02:37:21.126443 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 02:37:21.126453 | orchestrator | Thursday 19 March 2026 02:37:17 +0000 (0:00:02.635) 0:09:45.015 ******** 2026-03-19 02:37:21.126463 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 02:37:21.126473 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 02:37:21.126483 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:37:21.126493 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-19 02:37:21.126585 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 02:37:21.126598 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:37:21.126608 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 02:37:21.126618 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 02:37:21.126628 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:37:21.126638 | orchestrator | 2026-03-19 02:37:21.126647 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-19 02:37:21.126657 | orchestrator | Thursday 19 March 2026 02:37:19 +0000 (0:00:01.493) 0:09:46.509 ******** 2026-03-19 02:37:21.126666 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:37:21.126676 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:37:21.126685 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:37:21.126695 | orchestrator | 2026-03-19 02:37:21.126706 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-19 02:37:21.126716 | orchestrator | Thursday 19 March 2026 02:37:19 +0000 (0:00:00.338) 0:09:46.847 ******** 2026-03-19 02:37:21.126726 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:37:21.126736 | orchestrator | 2026-03-19 02:37:21.126747 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-19 02:37:21.126757 | orchestrator | Thursday 19 March 2026 02:37:19 +0000 (0:00:00.573) 0:09:47.421 ******** 2026-03-19 02:37:21.126768 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 02:37:21.126797 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 02:38:13.162638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 02:38:13.162762 | orchestrator | 2026-03-19 02:38:13.162776 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-19 02:38:13.162785 | orchestrator | Thursday 19 March 2026 02:37:21 +0000 (0:00:01.156) 0:09:48.577 ******** 2026-03-19 02:38:13.162792 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162801 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 02:38:13.162808 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162840 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 02:38:13.162848 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162855 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 02:38:13.162861 | orchestrator | 2026-03-19 02:38:13.162868 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 02:38:13.162875 | orchestrator | Thursday 19 March 2026 02:37:25 +0000 (0:00:04.380) 0:09:52.957 ******** 2026-03-19 02:38:13.162882 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162889 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:38:13.162896 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162916 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:38:13.162923 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:38:13.162930 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:38:13.162937 | orchestrator | 2026-03-19 02:38:13.162943 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 02:38:13.162950 | orchestrator | Thursday 19 March 2026 02:37:28 +0000 (0:00:03.249) 0:09:56.207 ******** 2026-03-19 02:38:13.162957 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-19 02:38:13.162964 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:38:13.162972 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-19 02:38:13.162978 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:38:13.162985 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-19 02:38:13.162991 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:38:13.162998 | orchestrator | 2026-03-19 02:38:13.163005 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-19 02:38:13.163011 | orchestrator | Thursday 19 March 2026 02:37:30 +0000 (0:00:01.518) 0:09:57.725 ******** 2026-03-19 02:38:13.163018 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-19 02:38:13.163024 | orchestrator | 2026-03-19 02:38:13.163031 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-19 02:38:13.163037 | orchestrator | Thursday 19 March 2026 02:37:30 +0000 (0:00:00.246) 0:09:57.972 ******** 2026-03-19 02:38:13.163044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-19 02:38:13.163051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163078 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:13.163085 | orchestrator | 2026-03-19 02:38:13.163091 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-19 02:38:13.163098 | orchestrator | Thursday 19 March 2026 02:37:31 +0000 (0:00:00.616) 0:09:58.588 ******** 2026-03-19 02:38:13.163105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 02:38:13.163145 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
02:38:13.163152 | orchestrator | 2026-03-19 02:38:13.163174 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-19 02:38:13.163181 | orchestrator | Thursday 19 March 2026 02:37:31 +0000 (0:00:00.618) 0:09:59.207 ******** 2026-03-19 02:38:13.163188 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 02:38:13.163195 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 02:38:13.163202 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 02:38:13.163209 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 02:38:13.163216 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 02:38:13.163222 | orchestrator | 2026-03-19 02:38:13.163229 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-19 02:38:13.163236 | orchestrator | Thursday 19 March 2026 02:38:02 +0000 (0:00:30.452) 0:10:29.659 ******** 2026-03-19 02:38:13.163243 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:13.163249 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:13.163332 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:13.163348 | orchestrator | 2026-03-19 02:38:13.163358 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-19 02:38:13.163369 | orchestrator | 
Thursday 19 March 2026 02:38:02 +0000 (0:00:00.379) 0:10:30.039 ******** 2026-03-19 02:38:13.163385 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:13.163396 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:13.163406 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:13.163416 | orchestrator | 2026-03-19 02:38:13.163426 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-19 02:38:13.163437 | orchestrator | Thursday 19 March 2026 02:38:02 +0000 (0:00:00.344) 0:10:30.384 ******** 2026-03-19 02:38:13.163446 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:38:13.163456 | orchestrator | 2026-03-19 02:38:13.163466 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-19 02:38:13.163476 | orchestrator | Thursday 19 March 2026 02:38:03 +0000 (0:00:00.853) 0:10:31.238 ******** 2026-03-19 02:38:13.163486 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:38:13.163562 | orchestrator | 2026-03-19 02:38:13.163574 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-19 02:38:13.163585 | orchestrator | Thursday 19 March 2026 02:38:04 +0000 (0:00:00.544) 0:10:31.782 ******** 2026-03-19 02:38:13.163595 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:38:13.163601 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:38:13.163608 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:38:13.163614 | orchestrator | 2026-03-19 02:38:13.163621 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-19 02:38:13.163636 | orchestrator | Thursday 19 March 2026 02:38:05 +0000 (0:00:01.631) 0:10:33.414 ******** 2026-03-19 02:38:13.163642 | orchestrator | changed: 
[testbed-node-3] 2026-03-19 02:38:13.163649 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:38:13.163655 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:38:13.163662 | orchestrator | 2026-03-19 02:38:13.163668 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-19 02:38:13.163675 | orchestrator | Thursday 19 March 2026 02:38:07 +0000 (0:00:01.218) 0:10:34.632 ******** 2026-03-19 02:38:13.163682 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:38:13.163688 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:38:13.163695 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:38:13.163701 | orchestrator | 2026-03-19 02:38:13.163708 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-19 02:38:13.163714 | orchestrator | Thursday 19 March 2026 02:38:09 +0000 (0:00:01.995) 0:10:36.627 ******** 2026-03-19 02:38:13.163721 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 02:38:13.163728 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 02:38:13.163735 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 02:38:13.163741 | orchestrator | 2026-03-19 02:38:13.163748 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-19 02:38:13.163755 | orchestrator | Thursday 19 March 2026 02:38:11 +0000 (0:00:02.800) 0:10:39.427 ******** 2026-03-19 02:38:13.163762 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:13.163768 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:13.163775 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:13.163782 | orchestrator 
| 2026-03-19 02:38:13.163788 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-19 02:38:13.163795 | orchestrator | Thursday 19 March 2026 02:38:12 +0000 (0:00:00.356) 0:10:39.783 ******** 2026-03-19 02:38:13.163802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:38:13.163808 | orchestrator | 2026-03-19 02:38:13.163824 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-19 02:38:15.741304 | orchestrator | Thursday 19 March 2026 02:38:13 +0000 (0:00:00.830) 0:10:40.614 ******** 2026-03-19 02:38:15.741393 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:15.741401 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:15.741407 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:15.741412 | orchestrator | 2026-03-19 02:38:15.741418 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-19 02:38:15.741423 | orchestrator | Thursday 19 March 2026 02:38:13 +0000 (0:00:00.324) 0:10:40.939 ******** 2026-03-19 02:38:15.741429 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:15.741435 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:15.741440 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:15.741445 | orchestrator | 2026-03-19 02:38:15.741450 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-19 02:38:15.741455 | orchestrator | Thursday 19 March 2026 02:38:13 +0000 (0:00:00.337) 0:10:41.276 ******** 2026-03-19 02:38:15.741460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:38:15.741465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:38:15.741470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:38:15.741475 | orchestrator 
| skipping: [testbed-node-3]
2026-03-19 02:38:15.741480 | orchestrator |
2026-03-19 02:38:15.741484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-19 02:38:15.741489 | orchestrator | Thursday 19 March 2026 02:38:14 +0000 (0:00:00.877) 0:10:42.153 ********
2026-03-19 02:38:15.741566 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:38:15.741574 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:38:15.741582 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:38:15.741590 | orchestrator |
2026-03-19 02:38:15.741598 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:38:15.741608 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-19 02:38:15.741634 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-19 02:38:15.741642 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-19 02:38:15.741651 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-19 02:38:15.741660 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-19 02:38:15.741668 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-19 02:38:15.741676 | orchestrator |
2026-03-19 02:38:15.741684 | orchestrator |
2026-03-19 02:38:15.741693 | orchestrator |
2026-03-19 02:38:15.741701 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:38:15.741707 | orchestrator | Thursday 19 March 2026 02:38:15 +0000 (0:00:00.562) 0:10:42.716 ********
2026-03-19 02:38:15.741711 | orchestrator | ===============================================================================
2026-03-19 02:38:15.741716 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.85s
2026-03-19 02:38:15.741721 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.50s
2026-03-19 02:38:15.741726 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.45s
2026-03-19 02:38:15.741730 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.43s
2026-03-19 02:38:15.741735 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.02s
2026-03-19 02:38:15.741740 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.21s
2026-03-19 02:38:15.741745 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.71s
2026-03-19 02:38:15.741749 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.86s
2026-03-19 02:38:15.741754 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.16s
2026-03-19 02:38:15.741758 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.25s
2026-03-19 02:38:15.741763 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.07s
2026-03-19 02:38:15.741768 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s
2026-03-19 02:38:15.741772 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.27s
2026-03-19 02:38:15.741777 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.38s
2026-03-19 02:38:15.741782 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.23s
2026-03-19 02:38:15.741787 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.73s
2026-03-19 02:38:15.741792 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.70s
2026-03-19 02:38:15.741797 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.60s
2026-03-19 02:38:15.741801 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.52s
2026-03-19 02:38:15.741806 | orchestrator | ceph-rgw : Get keys from monitors --------------------------------------- 3.25s
2026-03-19 02:38:18.049056 | orchestrator | 2026-03-19 02:38:18 | INFO  | Task c90962df-6440-43c6-af5f-eb31b674d8f6 (ceph-pools) was prepared for execution.
2026-03-19 02:38:18.049148 | orchestrator | 2026-03-19 02:38:18 | INFO  | It takes a moment until task c90962df-6440-43c6-af5f-eb31b674d8f6 (ceph-pools) has been started and output is visible here.
2026-03-19 02:38:32.290449 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-19 02:38:32.290618 | orchestrator | 2.16.14
2026-03-19 02:38:32.290636 | orchestrator |
2026-03-19 02:38:32.290647 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-19 02:38:32.290657 | orchestrator |
2026-03-19 02:38:32.291468 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 02:38:32.291538 | orchestrator | Thursday 19 March 2026 02:38:22 +0000 (0:00:00.609) 0:00:00.609 ********
2026-03-19 02:38:32.291545 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:38:32.291550 | orchestrator |
2026-03-19 02:38:32.291554 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 02:38:32.291558 | orchestrator | Thursday 19 March 2026 02:38:23 +0000 (0:00:00.649) 0:00:01.258 ********
2026-03-19 02:38:32.291563 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:38:32.291567 |
orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291571 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291575 | orchestrator | 2026-03-19 02:38:32.291579 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 02:38:32.291583 | orchestrator | Thursday 19 March 2026 02:38:23 +0000 (0:00:00.661) 0:00:01.920 ******** 2026-03-19 02:38:32.291587 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291591 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291595 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291598 | orchestrator | 2026-03-19 02:38:32.291602 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 02:38:32.291606 | orchestrator | Thursday 19 March 2026 02:38:24 +0000 (0:00:00.300) 0:00:02.220 ******** 2026-03-19 02:38:32.291626 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291630 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291634 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291637 | orchestrator | 2026-03-19 02:38:32.291641 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 02:38:32.291645 | orchestrator | Thursday 19 March 2026 02:38:25 +0000 (0:00:00.879) 0:00:03.099 ******** 2026-03-19 02:38:32.291649 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291653 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291656 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291660 | orchestrator | 2026-03-19 02:38:32.291664 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 02:38:32.291668 | orchestrator | Thursday 19 March 2026 02:38:25 +0000 (0:00:00.333) 0:00:03.433 ******** 2026-03-19 02:38:32.291672 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291675 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291679 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291683 | orchestrator | 2026-03-19 02:38:32.291687 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 02:38:32.291691 | orchestrator | Thursday 19 March 2026 02:38:25 +0000 (0:00:00.345) 0:00:03.778 ******** 2026-03-19 02:38:32.291695 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291698 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291702 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291706 | orchestrator | 2026-03-19 02:38:32.291710 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 02:38:32.291714 | orchestrator | Thursday 19 March 2026 02:38:26 +0000 (0:00:00.393) 0:00:04.171 ******** 2026-03-19 02:38:32.291718 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:32.291723 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:32.291727 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:32.291752 | orchestrator | 2026-03-19 02:38:32.291756 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 02:38:32.291760 | orchestrator | Thursday 19 March 2026 02:38:26 +0000 (0:00:00.492) 0:00:04.664 ******** 2026-03-19 02:38:32.291764 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291768 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291771 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291775 | orchestrator | 2026-03-19 02:38:32.291781 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 02:38:32.291787 | orchestrator | Thursday 19 March 2026 02:38:26 +0000 (0:00:00.297) 0:00:04.961 ******** 2026-03-19 02:38:32.291794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:38:32.291799 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:38:32.291802 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:38:32.291806 | orchestrator | 2026-03-19 02:38:32.291810 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 02:38:32.291814 | orchestrator | Thursday 19 March 2026 02:38:27 +0000 (0:00:00.664) 0:00:05.626 ******** 2026-03-19 02:38:32.291817 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:32.291821 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:32.291825 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:32.291829 | orchestrator | 2026-03-19 02:38:32.291832 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 02:38:32.291836 | orchestrator | Thursday 19 March 2026 02:38:28 +0000 (0:00:00.454) 0:00:06.081 ******** 2026-03-19 02:38:32.291840 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:38:32.291844 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:38:32.291848 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:38:32.291851 | orchestrator | 2026-03-19 02:38:32.291855 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 02:38:32.291859 | orchestrator | Thursday 19 March 2026 02:38:30 +0000 (0:00:02.192) 0:00:08.273 ******** 2026-03-19 02:38:32.291863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 02:38:32.291867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 02:38:32.291870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 02:38:32.291874 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:32.291878 | 
orchestrator | 2026-03-19 02:38:32.291900 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 02:38:32.291905 | orchestrator | Thursday 19 March 2026 02:38:30 +0000 (0:00:00.636) 0:00:08.910 ******** 2026-03-19 02:38:32.291910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291924 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:32.291928 | orchestrator | 2026-03-19 02:38:32.291932 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 02:38:32.291936 | orchestrator | Thursday 19 March 2026 02:38:31 +0000 (0:00:01.072) 0:00:09.982 ******** 2026-03-19 02:38:32.291951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:32.291965 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:32.291969 | orchestrator | 2026-03-19 02:38:32.291973 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 02:38:32.291977 | orchestrator | Thursday 19 March 2026 02:38:32 +0000 (0:00:00.164) 0:00:10.146 ******** 2026-03-19 02:38:32.291983 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e6aaaabd2759', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 02:38:28.885027', 'end': '2026-03-19 02:38:28.942288', 'delta': '0:00:00.057261', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e6aaaabd2759'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 02:38:32.291989 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7d1c29d08d66', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 02:38:29.455482', 'end': '2026-03-19 02:38:29.502845', 'delta': '0:00:00.047363', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d1c29d08d66'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 02:38:32.291998 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '115813b5cae5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 02:38:30.002933', 'end': '2026-03-19 02:38:30.050950', 'delta': '0:00:00.048017', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['115813b5cae5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 02:38:39.248550 | orchestrator | 2026-03-19 02:38:39.248699 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 02:38:39.248767 | orchestrator | Thursday 19 March 2026 02:38:32 +0000 (0:00:00.212) 0:00:10.358 ******** 2026-03-19 02:38:39.248789 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:39.248806 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:39.248817 | 
orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:39.248828 | orchestrator | 2026-03-19 02:38:39.248839 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 02:38:39.248859 | orchestrator | Thursday 19 March 2026 02:38:32 +0000 (0:00:00.442) 0:00:10.801 ******** 2026-03-19 02:38:39.248898 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-19 02:38:39.248917 | orchestrator | 2026-03-19 02:38:39.248934 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 02:38:39.248951 | orchestrator | Thursday 19 March 2026 02:38:34 +0000 (0:00:01.838) 0:00:12.640 ******** 2026-03-19 02:38:39.248968 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.248986 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249002 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249019 | orchestrator | 2026-03-19 02:38:39.249038 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 02:38:39.249056 | orchestrator | Thursday 19 March 2026 02:38:34 +0000 (0:00:00.304) 0:00:12.945 ******** 2026-03-19 02:38:39.249076 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249096 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249110 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249123 | orchestrator | 2026-03-19 02:38:39.249135 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 02:38:39.249147 | orchestrator | Thursday 19 March 2026 02:38:35 +0000 (0:00:00.790) 0:00:13.735 ******** 2026-03-19 02:38:39.249159 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249172 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249184 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249196 | orchestrator | 2026-03-19 02:38:39.249208 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 02:38:39.249221 | orchestrator | Thursday 19 March 2026 02:38:35 +0000 (0:00:00.308) 0:00:14.044 ******** 2026-03-19 02:38:39.249233 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:39.249252 | orchestrator | 2026-03-19 02:38:39.249271 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 02:38:39.249289 | orchestrator | Thursday 19 March 2026 02:38:36 +0000 (0:00:00.135) 0:00:14.179 ******** 2026-03-19 02:38:39.249308 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249326 | orchestrator | 2026-03-19 02:38:39.249345 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 02:38:39.249363 | orchestrator | Thursday 19 March 2026 02:38:36 +0000 (0:00:00.228) 0:00:14.408 ******** 2026-03-19 02:38:39.249380 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249397 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249414 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249431 | orchestrator | 2026-03-19 02:38:39.249447 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 02:38:39.249463 | orchestrator | Thursday 19 March 2026 02:38:36 +0000 (0:00:00.300) 0:00:14.709 ******** 2026-03-19 02:38:39.249481 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249688 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249709 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249727 | orchestrator | 2026-03-19 02:38:39.249745 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 02:38:39.249763 | orchestrator | Thursday 19 March 2026 02:38:36 +0000 (0:00:00.335) 0:00:15.045 ******** 2026-03-19 02:38:39.249782 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 02:38:39.249799 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249817 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249834 | orchestrator | 2026-03-19 02:38:39.249876 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 02:38:39.249896 | orchestrator | Thursday 19 March 2026 02:38:37 +0000 (0:00:00.537) 0:00:15.582 ******** 2026-03-19 02:38:39.249916 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.249936 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.249955 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.249973 | orchestrator | 2026-03-19 02:38:39.249991 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 02:38:39.250010 | orchestrator | Thursday 19 March 2026 02:38:37 +0000 (0:00:00.332) 0:00:15.915 ******** 2026-03-19 02:38:39.250114 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.250134 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.250153 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.250172 | orchestrator | 2026-03-19 02:38:39.250190 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 02:38:39.250208 | orchestrator | Thursday 19 March 2026 02:38:38 +0000 (0:00:00.332) 0:00:16.248 ******** 2026-03-19 02:38:39.250226 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.250245 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.250262 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.250279 | orchestrator | 2026-03-19 02:38:39.250297 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 02:38:39.250315 | orchestrator | Thursday 19 March 2026 02:38:38 +0000 (0:00:00.528) 0:00:16.776 ******** 2026-03-19 02:38:39.250333 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 02:38:39.250352 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.250371 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.250391 | orchestrator | 2026-03-19 02:38:39.250409 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 02:38:39.250426 | orchestrator | Thursday 19 March 2026 02:38:39 +0000 (0:00:00.336) 0:00:17.113 ******** 2026-03-19 02:38:39.250486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-19 02:38:39.250690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.250740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.349199 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.349219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.349251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.349283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.349315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.349371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-19 02:38:39.526441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.526688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.526763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.526795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.526849 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.526872 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:39.526906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.526943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.526965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.526985 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:39.527006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.527025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.527046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.527081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-19 02:38:39.766981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.767004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.767015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.767025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.767036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-19 02:38:39.767047 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:39.767057 | orchestrator | 2026-03-19 02:38:39.767067 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-19 02:38:39.767082 | orchestrator | Thursday 19 March 2026 02:38:39 +0000 (0:00:00.626) 0:00:17.739 ******** 2026-03-19 02:38:39.767114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.880873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881010 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881205 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881311 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.881351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.989839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.989939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.989953 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.989964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990181 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 02:38:39.990199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:39.990270 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-19 02:38:40.066479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.066601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-17-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266123 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266214 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:40.266227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266266 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266293 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266314 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:40.266359 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:50.560468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:50.560666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-19-01-18-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-19 02:38:50.560716 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.560733 | orchestrator | 2026-03-19 02:38:50.560745 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 02:38:50.560758 | orchestrator | Thursday 19 March 2026 02:38:40 +0000 (0:00:00.591) 0:00:18.331 ******** 2026-03-19 02:38:50.560769 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:50.560780 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:50.560791 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:50.560802 | orchestrator | 2026-03-19 02:38:50.560813 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-19 02:38:50.560824 | orchestrator | Thursday 19 March 2026 02:38:41 +0000 (0:00:00.868) 0:00:19.200 ******** 2026-03-19 02:38:50.560835 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:50.560845 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:50.560856 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:50.560867 | orchestrator | 2026-03-19 02:38:50.560878 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 02:38:50.560889 | orchestrator | Thursday 19 March 2026 02:38:41 +0000 (0:00:00.306) 0:00:19.506 ******** 2026-03-19 02:38:50.560908 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:50.560948 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:50.560968 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:50.560986 | orchestrator | 2026-03-19 02:38:50.561005 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 02:38:50.561023 | orchestrator | Thursday 19 March 2026 02:38:42 +0000 (0:00:00.688) 0:00:20.195 ******** 2026-03-19 02:38:50.561042 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.561061 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.561078 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.561095 | orchestrator | 2026-03-19 02:38:50.561113 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 02:38:50.561130 | orchestrator | Thursday 19 March 2026 02:38:42 +0000 (0:00:00.314) 0:00:20.510 ******** 2026-03-19 02:38:50.561146 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.561163 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.561181 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.561199 | orchestrator | 2026-03-19 02:38:50.561216 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-19 02:38:50.561234 | orchestrator | Thursday 19 March 2026 02:38:43 +0000 (0:00:00.703) 0:00:21.214 ******** 2026-03-19 02:38:50.561252 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.561271 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.561287 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.561334 | orchestrator | 2026-03-19 02:38:50.561352 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 02:38:50.561370 | orchestrator | Thursday 19 March 2026 02:38:43 +0000 (0:00:00.323) 0:00:21.537 ******** 2026-03-19 02:38:50.561390 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 02:38:50.561408 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 02:38:50.561427 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 02:38:50.561438 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 02:38:50.561449 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 02:38:50.561474 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 02:38:50.561508 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 02:38:50.561519 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 02:38:50.561531 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 02:38:50.561542 | orchestrator | 2026-03-19 02:38:50.561552 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 02:38:50.561567 | orchestrator | Thursday 19 March 2026 02:38:44 +0000 (0:00:01.143) 0:00:22.681 ******** 2026-03-19 02:38:50.561613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 02:38:50.561635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 02:38:50.561653 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-19 02:38:50.561673 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.561692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 02:38:50.561712 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 02:38:50.561733 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 02:38:50.561752 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.561769 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 02:38:50.561787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 02:38:50.561804 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 02:38:50.561823 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.561925 | orchestrator | 2026-03-19 02:38:50.561956 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 02:38:50.561975 | orchestrator | Thursday 19 March 2026 02:38:44 +0000 (0:00:00.342) 0:00:23.024 ******** 2026-03-19 02:38:50.561995 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:38:50.562094 | orchestrator | 2026-03-19 02:38:50.562110 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 02:38:50.562123 | orchestrator | Thursday 19 March 2026 02:38:45 +0000 (0:00:00.763) 0:00:23.787 ******** 2026-03-19 02:38:50.562134 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562145 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.562156 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.562167 | orchestrator | 2026-03-19 02:38:50.562178 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-19 02:38:50.562228 | orchestrator | Thursday 19 March 2026 02:38:46 +0000 (0:00:00.357) 0:00:24.144 ******** 2026-03-19 02:38:50.562241 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562252 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.562263 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.562273 | orchestrator | 2026-03-19 02:38:50.562284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 02:38:50.562295 | orchestrator | Thursday 19 March 2026 02:38:46 +0000 (0:00:00.311) 0:00:24.456 ******** 2026-03-19 02:38:50.562306 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562317 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:38:50.562328 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:38:50.562339 | orchestrator | 2026-03-19 02:38:50.562350 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 02:38:50.562361 | orchestrator | Thursday 19 March 2026 02:38:46 +0000 (0:00:00.525) 0:00:24.981 ******** 2026-03-19 02:38:50.562372 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:50.562383 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:50.562393 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:50.562404 | orchestrator | 2026-03-19 02:38:50.562415 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 02:38:50.562426 | orchestrator | Thursday 19 March 2026 02:38:47 +0000 (0:00:00.451) 0:00:25.433 ******** 2026-03-19 02:38:50.562450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:38:50.562471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:38:50.562482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:38:50.562531 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562543 | 
orchestrator | 2026-03-19 02:38:50.562554 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 02:38:50.562565 | orchestrator | Thursday 19 March 2026 02:38:47 +0000 (0:00:00.398) 0:00:25.831 ******** 2026-03-19 02:38:50.562576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:38:50.562587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:38:50.562598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:38:50.562609 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562619 | orchestrator | 2026-03-19 02:38:50.562630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 02:38:50.562642 | orchestrator | Thursday 19 March 2026 02:38:48 +0000 (0:00:00.386) 0:00:26.217 ******** 2026-03-19 02:38:50.562652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 02:38:50.562664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 02:38:50.562674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 02:38:50.562685 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:38:50.562696 | orchestrator | 2026-03-19 02:38:50.562707 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 02:38:50.562717 | orchestrator | Thursday 19 March 2026 02:38:48 +0000 (0:00:00.448) 0:00:26.666 ******** 2026-03-19 02:38:50.562728 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:38:50.562739 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:38:50.562750 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:38:50.562761 | orchestrator | 2026-03-19 02:38:50.562772 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 02:38:50.562783 | orchestrator | Thursday 19 March 2026 02:38:48 +0000 
(0:00:00.344) 0:00:27.010 ******** 2026-03-19 02:38:50.562794 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 02:38:50.562805 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 02:38:50.562816 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 02:38:50.562826 | orchestrator | 2026-03-19 02:38:50.562837 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 02:38:50.562848 | orchestrator | Thursday 19 March 2026 02:38:49 +0000 (0:00:00.789) 0:00:27.800 ******** 2026-03-19 02:38:50.562859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:38:50.562903 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:40:32.131966 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:40:32.132109 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 02:40:32.132126 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 02:40:32.132138 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 02:40:32.132149 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 02:40:32.132160 | orchestrator | 2026-03-19 02:40:32.132172 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 02:40:32.132185 | orchestrator | Thursday 19 March 2026 02:38:50 +0000 (0:00:00.824) 0:00:28.624 ******** 2026-03-19 02:40:32.132197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 02:40:32.132208 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 02:40:32.132219 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 02:40:32.132259 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 02:40:32.132270 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 02:40:32.132282 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 02:40:32.132293 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 02:40:32.132305 | orchestrator | 2026-03-19 02:40:32.132317 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-19 02:40:32.132329 | orchestrator | Thursday 19 March 2026 02:38:52 +0000 (0:00:01.662) 0:00:30.287 ******** 2026-03-19 02:40:32.132342 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:40:32.132355 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:40:32.132368 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-19 02:40:32.132380 | orchestrator | 2026-03-19 02:40:32.132392 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-19 02:40:32.132404 | orchestrator | Thursday 19 March 2026 02:38:52 +0000 (0:00:00.393) 0:00:30.680 ******** 2026-03-19 02:40:32.132421 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:40:32.132436 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-19 02:40:32.132466 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:40:32.132519 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:40:32.132534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-19 02:40:32.132549 | orchestrator | 2026-03-19 02:40:32.132562 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-19 02:40:32.132576 | orchestrator | Thursday 19 March 2026 02:39:38 +0000 (0:00:45.705) 0:01:16.386 ******** 2026-03-19 02:40:32.132588 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132601 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132614 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132641 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132656 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 
02:40:32.132670 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-19 02:40:32.132682 | orchestrator | 2026-03-19 02:40:32.132696 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-19 02:40:32.132710 | orchestrator | Thursday 19 March 2026 02:40:02 +0000 (0:00:24.046) 0:01:40.432 ******** 2026-03-19 02:40:32.132757 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132771 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132781 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132791 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132801 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132811 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132823 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 02:40:32.132832 | orchestrator | 2026-03-19 02:40:32.132842 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-19 02:40:32.132853 | orchestrator | Thursday 19 March 2026 02:40:13 +0000 (0:00:11.631) 0:01:52.063 ******** 2026-03-19 02:40:32.132863 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132873 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.132883 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.132894 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132904 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.132914 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.132924 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132935 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.132945 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.132956 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132966 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.132976 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.132986 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.132997 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.133007 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.133019 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 02:40:32.133029 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 02:40:32.133040 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 02:40:32.133052 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-19 02:40:32.133064 | orchestrator | 2026-03-19 02:40:32.133084 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:40:32.133096 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-19 02:40:32.133108 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-19 02:40:32.133119 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-19 02:40:32.133128 | orchestrator | 2026-03-19 02:40:32.133137 | orchestrator | 2026-03-19 02:40:32.133146 | orchestrator | 2026-03-19 02:40:32.133155 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:40:32.133175 | orchestrator | Thursday 19 March 2026 02:40:32 +0000 (0:00:18.113) 0:02:10.177 ******** 2026-03-19 02:40:32.133186 | orchestrator | =============================================================================== 2026-03-19 02:40:32.133197 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.71s 2026-03-19 02:40:32.133208 | orchestrator | generate keys ---------------------------------------------------------- 24.05s 2026-03-19 02:40:32.133218 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.11s 2026-03-19 02:40:32.133230 | orchestrator | get keys from monitors ------------------------------------------------- 11.63s 2026-03-19 02:40:32.133241 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.19s 2026-03-19 02:40:32.133251 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.84s 2026-03-19 02:40:32.133262 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.66s 2026-03-19 02:40:32.133273 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.14s 2026-03-19 02:40:32.133285 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.07s 2026-03-19 02:40:32.133297 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-03-19 
02:40:32.133307 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.87s 2026-03-19 02:40:32.133318 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s 2026-03-19 02:40:32.133327 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.79s 2026-03-19 02:40:32.133345 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.79s 2026-03-19 02:40:32.459657 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s 2026-03-19 02:40:32.459753 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s 2026-03-19 02:40:32.459759 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2026-03-19 02:40:32.459764 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2026-03-19 02:40:32.459769 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s 2026-03-19 02:40:32.459773 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2026-03-19 02:40:34.807325 | orchestrator | 2026-03-19 02:40:34 | INFO  | Task e11b6bb7-2f97-45f6-a98b-a8af2bc6d3c8 (copy-ceph-keys) was prepared for execution. 2026-03-19 02:40:34.807426 | orchestrator | 2026-03-19 02:40:34 | INFO  | It takes a moment until task e11b6bb7-2f97-45f6-a98b-a8af2bc6d3c8 (copy-ceph-keys) has been started and output is visible here. 
2026-03-19 02:41:10.288286 | orchestrator | 2026-03-19 02:41:10.288434 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-19 02:41:10.288462 | orchestrator | 2026-03-19 02:41:10.288539 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-19 02:41:10.288558 | orchestrator | Thursday 19 March 2026 02:40:38 +0000 (0:00:00.145) 0:00:00.145 ******** 2026-03-19 02:41:10.288569 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-19 02:41:10.288580 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 02:41:10.288609 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288619 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-19 02:41:10.288628 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-19 02:41:10.288669 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-19 02:41:10.288679 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-19 02:41:10.288688 | orchestrator | 2026-03-19 02:41:10.288698 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-19 02:41:10.288708 | orchestrator | Thursday 19 March 2026 02:40:43 +0000 (0:00:04.749) 0:00:04.894 ******** 2026-03-19 02:41:10.288733 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-19 02:41:10.288743 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288752 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 02:41:10.288771 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-19 02:41:10.288789 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-19 02:41:10.288799 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-19 02:41:10.288808 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-19 02:41:10.288818 | orchestrator | 2026-03-19 02:41:10.288829 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-19 02:41:10.288840 | orchestrator | Thursday 19 March 2026 02:40:48 +0000 (0:00:04.361) 0:00:09.256 ******** 2026-03-19 02:41:10.288851 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-19 02:41:10.288862 | orchestrator | 2026-03-19 02:41:10.288873 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-19 02:41:10.288884 | orchestrator | Thursday 19 March 2026 02:40:48 +0000 (0:00:00.849) 0:00:10.106 ******** 2026-03-19 02:41:10.288894 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-19 
02:41:10.288906 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288917 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288928 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 02:41:10.288938 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.288949 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-19 02:41:10.288959 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-19 02:41:10.288970 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-19 02:41:10.288981 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-19 02:41:10.288991 | orchestrator | 2026-03-19 02:41:10.289003 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-19 02:41:10.289013 | orchestrator | Thursday 19 March 2026 02:41:00 +0000 (0:00:11.915) 0:00:22.021 ******** 2026-03-19 02:41:10.289024 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-19 02:41:10.289035 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-19 02:41:10.289045 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-19 02:41:10.289056 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-19 02:41:10.289093 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-19 02:41:10.289105 | orchestrator 
| ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-19 02:41:10.289116 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-19 02:41:10.289126 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-19 02:41:10.289137 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-19 02:41:10.289148 | orchestrator | 2026-03-19 02:41:10.289160 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-19 02:41:10.289171 | orchestrator | Thursday 19 March 2026 02:41:03 +0000 (0:00:02.750) 0:00:24.772 ******** 2026-03-19 02:41:10.289182 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-19 02:41:10.289193 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.289204 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.289215 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-19 02:41:10.289226 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-19 02:41:10.289236 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-19 02:41:10.289245 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-19 02:41:10.289255 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-19 02:41:10.289264 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-19 02:41:10.289274 | orchestrator | 2026-03-19 02:41:10.289289 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:41:10.289299 | orchestrator | testbed-manager : ok=6 
 changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:41:10.289310 | orchestrator | 2026-03-19 02:41:10.289319 | orchestrator | 2026-03-19 02:41:10.289329 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:41:10.289338 | orchestrator | Thursday 19 March 2026 02:41:10 +0000 (0:00:06.493) 0:00:31.266 ******** 2026-03-19 02:41:10.289348 | orchestrator | =============================================================================== 2026-03-19 02:41:10.289357 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.92s 2026-03-19 02:41:10.289366 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.49s 2026-03-19 02:41:10.289376 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.75s 2026-03-19 02:41:10.289385 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.36s 2026-03-19 02:41:10.289395 | orchestrator | Check if target directories exist --------------------------------------- 2.75s 2026-03-19 02:41:10.289404 | orchestrator | Create share directory -------------------------------------------------- 0.85s 2026-03-19 02:41:22.351992 | orchestrator | 2026-03-19 02:41:22 | INFO  | Task bdb0a295-ec87-4dd0-884f-1dcde2052cfc (cephclient) was prepared for execution. 2026-03-19 02:41:22.352100 | orchestrator | 2026-03-19 02:41:22 | INFO  | It takes a moment until task bdb0a295-ec87-4dd0-884f-1dcde2052cfc (cephclient) has been started and output is visible here. 
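The play above fetches one keyring per consuming service and writes it both to the share directory and into the matching kolla overlay directory. A rough sketch of the per-client work, as plain CLI calls (the client names come from the log; the use of `ceph auth get` and the flat target path are assumptions about what the tasks wrap):

```shell
# Print the fetch/write command for each ceph client keyring seen in the log.
# Commands are only printed here; nothing talks to a ceph cluster.
set -eu
keyring_cmds() {
  overlays=/opt/configuration/environments/kolla/files/overlays
  for client in cinder cinder-backup nova glance gnocchi manila; do
    echo "ceph auth get client.$client > $overlays/ceph.client.$client.keyring"
  done
}
keyring_cmds
```

In the actual run the role additionally handles `ceph.client.admin.keyring` and sorts each file under the overlay directory of the service that consumes it (nova, glance, gnocchi, manila, ...).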
2026-03-19 02:42:21.270160 | orchestrator | 2026-03-19 02:42:21.270272 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-19 02:42:21.270281 | orchestrator | 2026-03-19 02:42:21.270289 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-19 02:42:21.270296 | orchestrator | Thursday 19 March 2026 02:41:26 +0000 (0:00:00.177) 0:00:00.177 ******** 2026-03-19 02:42:21.270303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-19 02:42:21.270335 | orchestrator | 2026-03-19 02:42:21.270341 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-19 02:42:21.270347 | orchestrator | Thursday 19 March 2026 02:41:26 +0000 (0:00:00.246) 0:00:00.424 ******** 2026-03-19 02:42:21.270355 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-19 02:42:21.270361 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-19 02:42:21.270368 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-19 02:42:21.270374 | orchestrator | 2026-03-19 02:42:21.270380 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-19 02:42:21.270386 | orchestrator | Thursday 19 March 2026 02:41:27 +0000 (0:00:01.075) 0:00:01.500 ******** 2026-03-19 02:42:21.270393 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-19 02:42:21.270401 | orchestrator | 2026-03-19 02:42:21.270408 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-19 02:42:21.270414 | orchestrator | Thursday 19 March 2026 02:41:28 +0000 (0:00:01.231) 0:00:02.732 ******** 2026-03-19 02:42:21.270420 | orchestrator | 
changed: [testbed-manager] 2026-03-19 02:42:21.270426 | orchestrator | 2026-03-19 02:42:21.270431 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-19 02:42:21.270436 | orchestrator | Thursday 19 March 2026 02:41:29 +0000 (0:00:00.806) 0:00:03.538 ******** 2026-03-19 02:42:21.270442 | orchestrator | changed: [testbed-manager] 2026-03-19 02:42:21.270447 | orchestrator | 2026-03-19 02:42:21.270452 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-19 02:42:21.270458 | orchestrator | Thursday 19 March 2026 02:41:30 +0000 (0:00:00.807) 0:00:04.346 ******** 2026-03-19 02:42:21.270463 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-19 02:42:21.270469 | orchestrator | ok: [testbed-manager] 2026-03-19 02:42:21.270569 | orchestrator | 2026-03-19 02:42:21.270578 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-19 02:42:21.270584 | orchestrator | Thursday 19 March 2026 02:42:11 +0000 (0:00:41.431) 0:00:45.778 ******** 2026-03-19 02:42:21.270591 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-19 02:42:21.270598 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-19 02:42:21.270604 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-19 02:42:21.270610 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-19 02:42:21.270617 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-19 02:42:21.270623 | orchestrator | 2026-03-19 02:42:21.270629 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-19 02:42:21.270635 | orchestrator | Thursday 19 March 2026 02:42:15 +0000 (0:00:03.628) 0:00:49.407 ******** 2026-03-19 02:42:21.270641 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-19 02:42:21.270647 | 
orchestrator | 2026-03-19 02:42:21.270653 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-19 02:42:21.270658 | orchestrator | Thursday 19 March 2026 02:42:15 +0000 (0:00:00.423) 0:00:49.830 ******** 2026-03-19 02:42:21.270664 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:42:21.270670 | orchestrator | 2026-03-19 02:42:21.270676 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-19 02:42:21.270682 | orchestrator | Thursday 19 March 2026 02:42:15 +0000 (0:00:00.137) 0:00:49.968 ******** 2026-03-19 02:42:21.270688 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:42:21.270694 | orchestrator | 2026-03-19 02:42:21.270700 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-19 02:42:21.270722 | orchestrator | Thursday 19 March 2026 02:42:16 +0000 (0:00:00.551) 0:00:50.520 ******** 2026-03-19 02:42:21.270729 | orchestrator | changed: [testbed-manager] 2026-03-19 02:42:21.270747 | orchestrator | 2026-03-19 02:42:21.270753 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-19 02:42:21.270760 | orchestrator | Thursday 19 March 2026 02:42:18 +0000 (0:00:01.539) 0:00:52.060 ******** 2026-03-19 02:42:21.270766 | orchestrator | changed: [testbed-manager] 2026-03-19 02:42:21.270771 | orchestrator | 2026-03-19 02:42:21.270778 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-19 02:42:21.270784 | orchestrator | Thursday 19 March 2026 02:42:18 +0000 (0:00:00.710) 0:00:52.771 ******** 2026-03-19 02:42:21.270791 | orchestrator | changed: [testbed-manager] 2026-03-19 02:42:21.270797 | orchestrator | 2026-03-19 02:42:21.270804 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-19 02:42:21.270810 | orchestrator | Thursday 19 March 2026 
02:42:19 +0000 (0:00:00.604) 0:00:53.375 ******** 2026-03-19 02:42:21.270816 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-19 02:42:21.270823 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-19 02:42:21.270828 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-19 02:42:21.270835 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-19 02:42:21.270841 | orchestrator | 2026-03-19 02:42:21.270847 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:42:21.270854 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 02:42:21.270862 | orchestrator | 2026-03-19 02:42:21.270868 | orchestrator | 2026-03-19 02:42:21.270896 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:42:21.270903 | orchestrator | Thursday 19 March 2026 02:42:20 +0000 (0:00:01.490) 0:00:54.866 ******** 2026-03-19 02:42:21.270909 | orchestrator | =============================================================================== 2026-03-19 02:42:21.270914 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.43s 2026-03-19 02:42:21.270919 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.63s 2026-03-19 02:42:21.270925 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s 2026-03-19 02:42:21.270931 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2026-03-19 02:42:21.270936 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.23s 2026-03-19 02:42:21.270942 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.08s 2026-03-19 02:42:21.270948 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file 
---------------- 0.81s 2026-03-19 02:42:21.270953 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2026-03-19 02:42:21.270959 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s 2026-03-19 02:42:21.270965 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-03-19 02:42:21.270972 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s 2026-03-19 02:42:21.270978 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s 2026-03-19 02:42:21.270984 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-19 02:42:21.270991 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-19 02:42:23.589246 | orchestrator | 2026-03-19 02:42:23 | INFO  | Task bf24a18d-4422-4291-8c1c-8397791b1814 (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-19 02:42:23.589336 | orchestrator | 2026-03-19 02:42:23 | INFO  | It takes a moment until task bf24a18d-4422-4291-8c1c-8397791b1814 (ceph-bootstrap-dashboard) has been started and output is visible here. 
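Two steps here are worth unpacking: the "Copy wrapper scripts" task above installs thin host-side wrappers (`ceph`, `ceph-authtool`, `rados`, `radosgw-admin`, `rbd`) that run the real CLI inside the cephclient container, and the ceph-bootstrap-dashboard task announced here applies a fixed set of mgr settings. A minimal sketch of both, with the exact compose invocation stated as an assumption about what the role actually generates:

```shell
# wrap: hypothetical shape of a wrapper such as /usr/local/bin/ceph -- it
# forwards the call into the cephclient container so the manager host needs
# no local ceph packages. Commands are only printed here, not executed.
set -eu
wrap() {
  echo "docker compose --project-directory /opt/cephclient exec cephclient $*"
}

# dashboard_cmds: the mgr settings the bootstrap play applies, restated as
# plain ceph CLI calls (values taken from the task names in the log).
dashboard_cmds() {
  cat <<'EOF'
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
EOF
}

wrap ceph -s
dashboard_cmds
```

After changing the dashboard settings, the play restarts the ceph manager service on each node in turn, which is why the three "Restart ceph manager service" plays run serially against testbed-node-0/1/2.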
2026-03-19 02:43:55.513309 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-19 02:43:55.513429 | orchestrator | 2.16.14 2026-03-19 02:43:55.513445 | orchestrator | 2026-03-19 02:43:55.513452 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-19 02:43:55.513564 | orchestrator | 2026-03-19 02:43:55.513574 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-19 02:43:55.513581 | orchestrator | Thursday 19 March 2026 02:42:27 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-03-19 02:43:55.513588 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513594 | orchestrator | 2026-03-19 02:43:55.513598 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-19 02:43:55.513602 | orchestrator | Thursday 19 March 2026 02:42:29 +0000 (0:00:01.915) 0:00:02.182 ******** 2026-03-19 02:43:55.513606 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513610 | orchestrator | 2026-03-19 02:43:55.513615 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-19 02:43:55.513619 | orchestrator | Thursday 19 March 2026 02:42:30 +0000 (0:00:01.016) 0:00:03.199 ******** 2026-03-19 02:43:55.513623 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513627 | orchestrator | 2026-03-19 02:43:55.513631 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-19 02:43:55.513635 | orchestrator | Thursday 19 March 2026 02:42:31 +0000 (0:00:00.963) 0:00:04.162 ******** 2026-03-19 02:43:55.513639 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513642 | orchestrator | 2026-03-19 02:43:55.513646 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-19 02:43:55.513650 | orchestrator | Thursday 19 March 
2026 02:42:32 +0000 (0:00:01.037) 0:00:05.200 ******** 2026-03-19 02:43:55.513654 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513658 | orchestrator | 2026-03-19 02:43:55.513675 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-19 02:43:55.513680 | orchestrator | Thursday 19 March 2026 02:42:33 +0000 (0:00:00.982) 0:00:06.182 ******** 2026-03-19 02:43:55.513684 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513688 | orchestrator | 2026-03-19 02:43:55.513692 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-19 02:43:55.513696 | orchestrator | Thursday 19 March 2026 02:42:34 +0000 (0:00:00.976) 0:00:07.159 ******** 2026-03-19 02:43:55.513700 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513704 | orchestrator | 2026-03-19 02:43:55.513708 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-19 02:43:55.513711 | orchestrator | Thursday 19 March 2026 02:42:35 +0000 (0:00:01.081) 0:00:08.241 ******** 2026-03-19 02:43:55.513715 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513719 | orchestrator | 2026-03-19 02:43:55.513723 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-19 02:43:55.513727 | orchestrator | Thursday 19 March 2026 02:42:37 +0000 (0:00:01.101) 0:00:09.343 ******** 2026-03-19 02:43:55.513731 | orchestrator | changed: [testbed-manager] 2026-03-19 02:43:55.513735 | orchestrator | 2026-03-19 02:43:55.513739 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-19 02:43:55.513742 | orchestrator | Thursday 19 March 2026 02:43:30 +0000 (0:00:53.399) 0:01:02.742 ******** 2026-03-19 02:43:55.513746 | orchestrator | skipping: [testbed-manager] 2026-03-19 02:43:55.513750 | orchestrator | 2026-03-19 02:43:55.513754 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-19 02:43:55.513758 | orchestrator | 2026-03-19 02:43:55.513762 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 02:43:55.513766 | orchestrator | Thursday 19 March 2026 02:43:30 +0000 (0:00:00.157) 0:01:02.899 ******** 2026-03-19 02:43:55.513770 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:43:55.513774 | orchestrator | 2026-03-19 02:43:55.513778 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-19 02:43:55.513781 | orchestrator | 2026-03-19 02:43:55.513785 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 02:43:55.513789 | orchestrator | Thursday 19 March 2026 02:43:42 +0000 (0:00:11.723) 0:01:14.623 ******** 2026-03-19 02:43:55.513800 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:43:55.513804 | orchestrator | 2026-03-19 02:43:55.513808 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-19 02:43:55.513811 | orchestrator | 2026-03-19 02:43:55.513815 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-19 02:43:55.513820 | orchestrator | Thursday 19 March 2026 02:43:43 +0000 (0:00:01.176) 0:01:15.800 ******** 2026-03-19 02:43:55.513824 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:43:55.513828 | orchestrator | 2026-03-19 02:43:55.513832 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:43:55.513838 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 02:43:55.513845 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:43:55.513852 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:43:55.513859 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 02:43:55.513865 | orchestrator | 2026-03-19 02:43:55.513872 | orchestrator | 2026-03-19 02:43:55.513878 | orchestrator | 2026-03-19 02:43:55.513883 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:43:55.513887 | orchestrator | Thursday 19 March 2026 02:43:55 +0000 (0:00:11.572) 0:01:27.373 ******** 2026-03-19 02:43:55.513892 | orchestrator | =============================================================================== 2026-03-19 02:43:55.513896 | orchestrator | Create admin user ------------------------------------------------------ 53.40s 2026-03-19 02:43:55.513915 | orchestrator | Restart ceph manager service ------------------------------------------- 24.47s 2026-03-19 02:43:55.513919 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.92s 2026-03-19 02:43:55.513924 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.10s 2026-03-19 02:43:55.513928 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.08s 2026-03-19 02:43:55.513933 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s 2026-03-19 02:43:55.513938 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2026-03-19 02:43:55.513942 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.98s 2026-03-19 02:43:55.513947 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s 2026-03-19 02:43:55.513951 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.96s 2026-03-19 02:43:55.513956 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.16s 2026-03-19 02:43:55.830355 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-19 02:43:57.937321 | orchestrator | 2026-03-19 02:43:57 | INFO  | Task 99b22883-57b1-4830-b662-3338788aede3 (keystone) was prepared for execution. 2026-03-19 02:43:57.937424 | orchestrator | 2026-03-19 02:43:57 | INFO  | It takes a moment until task 99b22883-57b1-4830-b662-3338788aede3 (keystone) has been started and output is visible here. 2026-03-19 02:44:05.311691 | orchestrator | 2026-03-19 02:44:05.311800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:44:05.311813 | orchestrator | 2026-03-19 02:44:05.311838 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:44:05.311843 | orchestrator | Thursday 19 March 2026 02:44:02 +0000 (0:00:00.259) 0:00:00.259 ******** 2026-03-19 02:44:05.311847 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:44:05.311851 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:44:05.311855 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:44:05.311859 | orchestrator | 2026-03-19 02:44:05.311863 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:44:05.311888 | orchestrator | Thursday 19 March 2026 02:44:02 +0000 (0:00:00.312) 0:00:00.572 ******** 2026-03-19 02:44:05.311892 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-19 02:44:05.311897 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-19 02:44:05.311900 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-19 02:44:05.311904 | orchestrator | 2026-03-19 02:44:05.311908 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-19 02:44:05.311912 | orchestrator | 2026-03-19 02:44:05.311915 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-19 02:44:05.311919 | orchestrator | Thursday 19 March 2026 02:44:02 +0000 (0:00:00.435) 0:00:01.007 ******** 2026-03-19 02:44:05.311923 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:44:05.311928 | orchestrator | 2026-03-19 02:44:05.311932 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-19 02:44:05.311936 | orchestrator | Thursday 19 March 2026 02:44:03 +0000 (0:00:00.566) 0:00:01.574 ******** 2026-03-19 02:44:05.311945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:05.311952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:05.311972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:05.311982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:05.311987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:05.311991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:05.311995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:05.311999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:05.312003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:05.312010 | orchestrator | 2026-03-19 02:44:05.312014 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-19 02:44:05.312021 | orchestrator | Thursday 19 March 2026 02:44:05 +0000 (0:00:01.875) 0:00:03.449 ******** 2026-03-19 02:44:11.144403 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:44:11.144600 | orchestrator | 2026-03-19 02:44:11.144635 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-19 02:44:11.144644 | orchestrator | Thursday 19 March 2026 02:44:05 +0000 (0:00:00.298) 0:00:03.748 ******** 2026-03-19 02:44:11.144651 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:44:11.144658 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:44:11.144664 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:44:11.144670 | orchestrator | 2026-03-19 02:44:11.144677 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-19 02:44:11.144683 | orchestrator | Thursday 19 March 2026 02:44:05 +0000 (0:00:00.313) 0:00:04.061 ******** 2026-03-19 02:44:11.144689 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 02:44:11.144695 | orchestrator | 2026-03-19 02:44:11.144701 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 02:44:11.144707 | orchestrator | Thursday 19 March 2026 02:44:06 +0000 (0:00:00.826) 0:00:04.888 ******** 2026-03-19 02:44:11.144723 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:44:11.144729 | orchestrator | 2026-03-19 02:44:11.144735 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-19 02:44:11.144741 | orchestrator | Thursday 19 March 2026 02:44:07 +0000 (0:00:00.490) 0:00:05.379 ******** 2026-03-19 02:44:11.144753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:11.144772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:11.144779 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:11.144838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:11.144884 | orchestrator | 2026-03-19 02:44:11.144890 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-19 02:44:11.144896 | orchestrator | Thursday 19 March 2026 02:44:10 +0000 (0:00:03.407) 0:00:08.787 ******** 2026-03-19 02:44:11.144908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:11.888372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:44:11.888555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:11.888575 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:44:11.888587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:11.888619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:44:11.888633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:11.888642 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:44:11.888668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:11.888677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-19 02:44:11.888686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:11.888711 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:44:11.888720 | orchestrator | 2026-03-19 02:44:11.888737 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-19 02:44:11.888748 | orchestrator | Thursday 19 March 2026 02:44:11 +0000 (0:00:00.503) 0:00:09.290 ******** 2026-03-19 02:44:11.888756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:11.888769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:44:11.888785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:15.267212 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:44:15.267312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:15.267325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:44:15.267358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:15.267365 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 02:44:15.267386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-19 02:44:15.267394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 02:44:15.267415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 02:44:15.267422 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:44:15.267428 | orchestrator | 2026-03-19 02:44:15.267434 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-19 02:44:15.267442 | orchestrator | Thursday 19 March 2026 02:44:11 +0000 (0:00:00.742) 0:00:10.033 ******** 2026-03-19 02:44:15.267449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:15.267461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:15.267473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:15.267537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:19.627136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-19 02:44:19.627293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-19 02:44:19.627320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:19.627338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 02:44:19.627376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-19 
02:44:19.627395 | orchestrator | 2026-03-19 02:44:19.627414 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-19 02:44:19.627433 | orchestrator | Thursday 19 March 2026 02:44:15 +0000 (0:00:03.378) 0:00:13.412 ******** 2026-03-19 02:44:19.627475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-19 02:44:19.627575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-19 02:44:19.627590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:19.627602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:44:19.627619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:19.627640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:44:22.793717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:22.793815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:22.793832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:22.793839 | orchestrator |
2026-03-19 02:44:22.793846 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-19 02:44:22.793854 | orchestrator | Thursday 19 March 2026 02:44:19 +0000 (0:00:04.359) 0:00:17.771 ********
2026-03-19 02:44:22.793860 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:44:22.793867 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:44:22.793872 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:44:22.793878 | orchestrator |
2026-03-19 02:44:22.793884 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-03-19 02:44:22.793890 | orchestrator | Thursday 19 March 2026 02:44:20 +0000 (0:00:01.338) 0:00:19.110 ********
2026-03-19 02:44:22.793896 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:22.793902 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:22.793908 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:22.793913 | orchestrator |
2026-03-19 02:44:22.793919 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-03-19 02:44:22.793925 | orchestrator | Thursday 19 March 2026 02:44:21 +0000 (0:00:00.634) 0:00:19.745 ********
2026-03-19 02:44:22.793931 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:22.793959 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:22.793974 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:22.793984 | orchestrator |
2026-03-19 02:44:22.793994 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-03-19 02:44:22.794003 | orchestrator | Thursday 19 March 2026 02:44:21 +0000 (0:00:00.405) 0:00:20.150 ********
2026-03-19 02:44:22.794056 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:22.794071 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:22.794080 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:22.794090 | orchestrator |
2026-03-19 02:44:22.794101 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-03-19 02:44:22.794111 | orchestrator | Thursday 19 March 2026 02:44:22 +0000 (0:00:00.275) 0:00:20.426 ********
2026-03-19 02:44:22.794172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:22.794183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:44:22.794189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:22.794195 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:22.794202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:22.794214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:44:22.794230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:22.794236 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:22.794248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:40.927928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:44:40.928048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:44:40.928060 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:40.928070 | orchestrator |
2026-03-19 02:44:40.928078 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-19 02:44:40.928086 | orchestrator | Thursday 19 March 2026 02:44:22 +0000 (0:00:00.512) 0:00:20.938 ********
2026-03-19 02:44:40.928092 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:40.928099 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:40.928106 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:40.928112 | orchestrator |
2026-03-19 02:44:40.928118 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-19 02:44:40.928125 | orchestrator | Thursday 19 March 2026 02:44:23 +0000 (0:00:00.265) 0:00:21.204 ********
2026-03-19 02:44:40.928133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-19 02:44:40.928174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-19 02:44:40.928197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-19 02:44:40.928204 | orchestrator |
2026-03-19 02:44:40.928211 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-19 02:44:40.928217 | orchestrator | Thursday 19 March 2026 02:44:24 +0000 (0:00:01.712) 0:00:22.916 ********
2026-03-19 02:44:40.928224 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 02:44:40.928231 | orchestrator |
2026-03-19 02:44:40.928237 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-19 02:44:40.928243 | orchestrator | Thursday 19 March 2026 02:44:25 +0000 (0:00:00.809) 0:00:23.725 ********
2026-03-19 02:44:40.928250 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:44:40.928256 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:44:40.928262 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:44:40.928268 | orchestrator |
2026-03-19 02:44:40.928275 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-19 02:44:40.928282 | orchestrator | Thursday 19 March 2026 02:44:26 +0000 (0:00:00.507) 0:00:24.233 ********
2026-03-19 02:44:40.928288 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 02:44:40.928295 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-19 02:44:40.928301 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-19 02:44:40.928308 | orchestrator |
2026-03-19 02:44:40.928315 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-19 02:44:40.928322 | orchestrator | Thursday 19 March 2026 02:44:27 +0000 (0:00:00.995) 0:00:25.228 ********
2026-03-19 02:44:40.928329 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:44:40.928336 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:44:40.928343 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:44:40.928349 | orchestrator |
2026-03-19 02:44:40.928355 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-19 02:44:40.928362 | orchestrator | Thursday 19 March 2026 02:44:27 +0000 (0:00:00.471) 0:00:25.699 ********
2026-03-19 02:44:40.928369 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-19 02:44:40.928375 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-19 02:44:40.928382 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-19 02:44:40.928388 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-19 02:44:40.928395 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-19 02:44:40.928402 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-19 02:44:40.928409 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-19 02:44:40.928415 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-19 02:44:40.928438 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-19 02:44:40.928445 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-19 02:44:40.928452 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-19 02:44:40.928459 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-19 02:44:40.928465 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-19 02:44:40.928473 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-19 02:44:40.928506 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-19 02:44:40.928521 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 02:44:40.928527 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 02:44:40.928534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-19 02:44:40.928542 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 02:44:40.928549 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 02:44:40.928556 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-19 02:44:40.928563 | orchestrator |
2026-03-19 02:44:40.928570 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-19 02:44:40.928577 | orchestrator | Thursday 19 March 2026 02:44:36 +0000 (0:00:08.531) 0:00:34.230 ********
2026-03-19 02:44:40.928584 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 02:44:40.928591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 02:44:40.928598 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-19 02:44:40.928605 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 02:44:40.928612 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 02:44:40.928619 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-19 02:44:40.928626 | orchestrator |
2026-03-19 02:44:40.928638 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-03-19 02:44:40.928645 | orchestrator | Thursday 19 March 2026 02:44:38 +0000 (0:00:02.551) 0:00:36.781 ********
2026-03-19 02:44:40.928654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:44:40.928669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:46:14.706910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-19 02:46:14.707025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:46:14.707057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:46:14.707066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-19 02:46:14.707075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:46:14.707098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:46:14.707130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-19 02:46:14.707140 | orchestrator |
2026-03-19 02:46:14.707149 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-19 02:46:14.707159 | orchestrator | Thursday 19 March 2026 02:44:40 +0000 (0:00:02.288) 0:00:39.070 ********
2026-03-19 02:46:14.707167 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:46:14.707176 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:46:14.707183 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:46:14.707191 | orchestrator |
2026-03-19 02:46:14.707198 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-03-19 02:46:14.707206 | orchestrator | Thursday 19 March 2026 02:44:41 +0000 (0:00:00.381) 0:00:39.452 ********
2026-03-19 02:46:14.707213 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707221 | orchestrator |
2026-03-19 02:46:14.707228 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-03-19 02:46:14.707235 | orchestrator | Thursday 19 March 2026 02:44:43 +0000 (0:00:02.329) 0:00:41.781 ********
2026-03-19 02:46:14.707241 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707247 | orchestrator |
2026-03-19 02:46:14.707253 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-19 02:46:14.707260 | orchestrator | Thursday 19 March 2026 02:44:46 +0000 (0:00:02.517) 0:00:44.299 ********
2026-03-19 02:46:14.707266 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:46:14.707273 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:46:14.707279 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:46:14.707285 | orchestrator |
2026-03-19 02:46:14.707292 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-19 02:46:14.707298 | orchestrator | Thursday 19 March 2026 02:44:46 +0000 (0:00:00.790) 0:00:45.090 ********
2026-03-19 02:46:14.707307 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:46:14.707314 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:46:14.707321 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:46:14.707329 | orchestrator |
2026-03-19 02:46:14.707341 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-19 02:46:14.707350 | orchestrator | Thursday 19 March 2026 02:44:47 +0000 (0:00:00.301) 0:00:45.391 ********
2026-03-19 02:46:14.707357 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:46:14.707364 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:46:14.707371 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:46:14.707379 | orchestrator |
2026-03-19 02:46:14.707387 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-19 02:46:14.707394 | orchestrator | Thursday 19 March 2026 02:44:47 +0000 (0:00:00.424) 0:00:45.816 ********
2026-03-19 02:46:14.707401 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707409 | orchestrator |
2026-03-19 02:46:14.707416 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-19 02:46:14.707423 | orchestrator | Thursday 19 March 2026 02:45:02 +0000 (0:00:14.856) 0:01:00.672 ********
2026-03-19 02:46:14.707431 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707438 | orchestrator |
2026-03-19 02:46:14.707445 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-19 02:46:14.707462 | orchestrator | Thursday 19 March 2026 02:45:13 +0000 (0:00:11.405) 0:01:12.078 ********
2026-03-19 02:46:14.707470 | orchestrator |
2026-03-19 02:46:14.707502 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-19 02:46:14.707511 | orchestrator | Thursday 19 March 2026 02:45:13 +0000 (0:00:00.062) 0:01:12.140 ********
2026-03-19 02:46:14.707519 | orchestrator |
2026-03-19 02:46:14.707528 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-19 02:46:14.707536 | orchestrator | Thursday 19 March 2026 02:45:14 +0000 (0:00:00.064) 0:01:12.205 ********
2026-03-19 02:46:14.707544 | orchestrator |
2026-03-19 02:46:14.707553 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-19 02:46:14.707562 | orchestrator | Thursday 19 March 2026 02:45:14 +0000 (0:00:00.066) 0:01:12.271 ********
2026-03-19 02:46:14.707570 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707579 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:46:14.707588 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:46:14.707597 | orchestrator |
2026-03-19 02:46:14.707605 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-19 02:46:14.707613 | orchestrator | Thursday 19 March 2026 02:45:58 +0000 (0:00:44.838) 0:01:57.110 ********
2026-03-19 02:46:14.707622 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:46:14.707630 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:46:14.707638 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707646 | orchestrator |
2026-03-19 02:46:14.707655 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-19 02:46:14.707663 | orchestrator | Thursday 19 March 2026 02:46:06 +0000 (0:00:07.659) 0:02:04.770 ********
2026-03-19 02:46:14.707671 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:46:14.707680 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:46:14.707689 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:46:14.707696 | orchestrator |
2026-03-19 02:46:14.707703 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-19 02:46:14.707709 | orchestrator | Thursday 19 March 2026 02:46:14 +0000 (0:00:07.471) 0:02:12.241 ********
2026-03-19 02:46:14.707724 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 02:47:10.117249 | orchestrator |
2026-03-19 02:47:10.117356 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-19 02:47:10.117368 | orchestrator | Thursday 19 March 2026 02:46:14 +0000 (0:00:00.610) 0:02:12.852 ********
2026-03-19 02:47:10.117378 | orchestrator | ok: [testbed-node-1]
2026-03-19 02:47:10.117386 | orchestrator | ok: [testbed-node-0]
2026-03-19 02:47:10.117393 | orchestrator | ok: [testbed-node-2]
2026-03-19 02:47:10.117401 | orchestrator |
2026-03-19 02:47:10.117408 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-19 02:47:10.117417 | orchestrator | Thursday 19 March 2026 02:46:15 +0000 (0:00:01.287) 0:02:14.140 ********
2026-03-19 02:47:10.117424 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:47:10.117432 | orchestrator |
2026-03-19 02:47:10.117439 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-19 02:47:10.117447 | orchestrator | Thursday 19 March 2026 02:46:17 +0000 (0:00:01.831) 0:02:15.971 ********
2026-03-19 02:47:10.117456 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-19 02:47:10.117464 | orchestrator |
2026-03-19 02:47:10.117517 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-19 02:47:10.117525 | orchestrator | Thursday 19 March 2026 02:46:30 +0000 (0:00:12.846) 0:02:28.818 ********
2026-03-19 02:47:10.117533 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-19 02:47:10.117539 | orchestrator |
2026-03-19 02:47:10.117547 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-19 02:47:10.117555 | orchestrator | Thursday 19 March 2026 02:46:57 +0000 (0:00:26.830) 0:02:55.648 ********
2026-03-19 02:47:10.117591 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-19 02:47:10.117601 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-19 02:47:10.117608 | orchestrator | 2026-03-19 02:47:10.117616 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-19 02:47:10.117624 | orchestrator | Thursday 19 March 2026 02:47:04 +0000 (0:00:07.376) 0:03:03.024 ******** 2026-03-19 02:47:10.117631 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:10.117638 | orchestrator | 2026-03-19 02:47:10.117646 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-19 02:47:10.117653 | orchestrator | Thursday 19 March 2026 02:47:05 +0000 (0:00:00.157) 0:03:03.182 ******** 2026-03-19 02:47:10.117659 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:10.117664 | orchestrator | 2026-03-19 02:47:10.117668 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-19 02:47:10.117687 | orchestrator | Thursday 19 March 2026 02:47:05 +0000 (0:00:00.138) 0:03:03.321 ******** 2026-03-19 02:47:10.117691 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:10.117696 | orchestrator | 2026-03-19 02:47:10.117700 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-19 02:47:10.117705 | orchestrator | Thursday 19 March 2026 02:47:05 +0000 (0:00:00.143) 0:03:03.464 ******** 2026-03-19 02:47:10.117709 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:10.117713 | orchestrator | 2026-03-19 02:47:10.117718 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-19 02:47:10.117722 | orchestrator | Thursday 19 March 2026 02:47:05 +0000 (0:00:00.571) 0:03:04.036 ******** 2026-03-19 02:47:10.117727 | orchestrator | ok: [testbed-node-0] 2026-03-19 
02:47:10.117731 | orchestrator | 2026-03-19 02:47:10.117736 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-19 02:47:10.117740 | orchestrator | Thursday 19 March 2026 02:47:09 +0000 (0:00:03.384) 0:03:07.420 ******** 2026-03-19 02:47:10.117744 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:10.117749 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:47:10.117753 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:47:10.117757 | orchestrator | 2026-03-19 02:47:10.117762 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:47:10.117767 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 02:47:10.117773 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-19 02:47:10.117777 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-19 02:47:10.117782 | orchestrator | 2026-03-19 02:47:10.117786 | orchestrator | 2026-03-19 02:47:10.117791 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:47:10.117795 | orchestrator | Thursday 19 March 2026 02:47:09 +0000 (0:00:00.448) 0:03:07.869 ******** 2026-03-19 02:47:10.117800 | orchestrator | =============================================================================== 2026-03-19 02:47:10.117805 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 44.84s 2026-03-19 02:47:10.117810 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.83s 2026-03-19 02:47:10.117815 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.86s 2026-03-19 02:47:10.117820 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.85s 2026-03-19 02:47:10.117826 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.41s 2026-03-19 02:47:10.117831 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.53s 2026-03-19 02:47:10.117835 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.66s 2026-03-19 02:47:10.117845 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.47s 2026-03-19 02:47:10.117850 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.38s 2026-03-19 02:47:10.117871 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.36s 2026-03-19 02:47:10.117876 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s 2026-03-19 02:47:10.117881 | orchestrator | keystone : Creating default user role ----------------------------------- 3.38s 2026-03-19 02:47:10.117886 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2026-03-19 02:47:10.117891 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.55s 2026-03-19 02:47:10.117896 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.52s 2026-03-19 02:47:10.117901 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.33s 2026-03-19 02:47:10.117906 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2026-03-19 02:47:10.117911 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.88s 2026-03-19 02:47:10.117916 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.83s 2026-03-19 02:47:10.117920 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 
1.71s 2026-03-19 02:47:13.535871 | orchestrator | 2026-03-19 02:47:13 | INFO  | Task f168b79d-ec45-4940-9317-9c8e273d246e (placement) was prepared for execution. 2026-03-19 02:47:13.535970 | orchestrator | 2026-03-19 02:47:13 | INFO  | It takes a moment until task f168b79d-ec45-4940-9317-9c8e273d246e (placement) has been started and output is visible here. 2026-03-19 02:47:50.607167 | orchestrator | 2026-03-19 02:47:50.607281 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:47:50.607297 | orchestrator | 2026-03-19 02:47:50.607308 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:47:50.607320 | orchestrator | Thursday 19 March 2026 02:47:17 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-19 02:47:50.607331 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:47:50.607344 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:47:50.607354 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:47:50.607364 | orchestrator | 2026-03-19 02:47:50.607375 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:47:50.607386 | orchestrator | Thursday 19 March 2026 02:47:17 +0000 (0:00:00.301) 0:00:00.557 ******** 2026-03-19 02:47:50.607397 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-19 02:47:50.607409 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-19 02:47:50.607441 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-19 02:47:50.607454 | orchestrator | 2026-03-19 02:47:50.607522 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-19 02:47:50.607532 | orchestrator | 2026-03-19 02:47:50.607542 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 02:47:50.607552 | orchestrator | Thursday 19 March 2026 02:47:18 
+0000 (0:00:00.451) 0:00:01.008 ******** 2026-03-19 02:47:50.607562 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:47:50.607573 | orchestrator | 2026-03-19 02:47:50.607583 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-19 02:47:50.607592 | orchestrator | Thursday 19 March 2026 02:47:18 +0000 (0:00:00.542) 0:00:01.550 ******** 2026-03-19 02:47:50.607602 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-19 02:47:50.607612 | orchestrator | 2026-03-19 02:47:50.607621 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-19 02:47:50.607631 | orchestrator | Thursday 19 March 2026 02:47:23 +0000 (0:00:04.071) 0:00:05.622 ******** 2026-03-19 02:47:50.607668 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-19 02:47:50.607679 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-19 02:47:50.607688 | orchestrator | 2026-03-19 02:47:50.607697 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-19 02:47:50.607707 | orchestrator | Thursday 19 March 2026 02:47:30 +0000 (0:00:07.126) 0:00:12.748 ******** 2026-03-19 02:47:50.607717 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-19 02:47:50.607726 | orchestrator | 2026-03-19 02:47:50.607736 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-19 02:47:50.607746 | orchestrator | Thursday 19 March 2026 02:47:34 +0000 (0:00:03.908) 0:00:16.657 ******** 2026-03-19 02:47:50.607756 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 02:47:50.607766 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-03-19 02:47:50.607775 | orchestrator | 2026-03-19 02:47:50.607785 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-19 02:47:50.607807 | orchestrator | Thursday 19 March 2026 02:47:38 +0000 (0:00:04.488) 0:00:21.146 ******** 2026-03-19 02:47:50.607817 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 02:47:50.607827 | orchestrator | 2026-03-19 02:47:50.607836 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-19 02:47:50.607845 | orchestrator | Thursday 19 March 2026 02:47:42 +0000 (0:00:03.517) 0:00:24.664 ******** 2026-03-19 02:47:50.607855 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-19 02:47:50.607864 | orchestrator | 2026-03-19 02:47:50.607874 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 02:47:50.607884 | orchestrator | Thursday 19 March 2026 02:47:46 +0000 (0:00:04.311) 0:00:28.976 ******** 2026-03-19 02:47:50.607894 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:50.607903 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:47:50.607913 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:47:50.607923 | orchestrator | 2026-03-19 02:47:50.607932 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-19 02:47:50.607942 | orchestrator | Thursday 19 March 2026 02:47:46 +0000 (0:00:00.372) 0:00:29.348 ******** 2026-03-19 02:47:50.607957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:50.608013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:50.608035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:50.608060 | orchestrator | 2026-03-19 02:47:50.608070 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-19 02:47:50.608080 | orchestrator | Thursday 19 March 2026 02:47:47 +0000 (0:00:01.035) 0:00:30.384 ******** 2026-03-19 02:47:50.608090 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:50.608100 | orchestrator | 2026-03-19 02:47:50.608109 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-19 02:47:50.608118 | orchestrator | Thursday 19 March 2026 02:47:48 +0000 (0:00:00.339) 0:00:30.723 ******** 2026-03-19 02:47:50.608128 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:50.608138 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:47:50.608148 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:47:50.608158 | orchestrator | 2026-03-19 02:47:50.608168 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-19 02:47:50.608178 | orchestrator | Thursday 19 March 2026 02:47:48 +0000 (0:00:00.296) 0:00:31.020 ******** 2026-03-19 02:47:50.608187 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:47:50.608197 | orchestrator | 2026-03-19 02:47:50.608207 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-19 02:47:50.608232 | orchestrator | Thursday 19 March 2026 02:47:48 +0000 (0:00:00.510) 0:00:31.530 ******** 2026-03-19 
02:47:50.608253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:50.608275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:53.471453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:53.471633 | orchestrator | 2026-03-19 02:47:53.471646 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-19 02:47:53.471662 | orchestrator | Thursday 19 March 2026 02:47:50 +0000 (0:00:01.662) 0:00:33.193 ******** 2026-03-19 02:47:53.471670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471678 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:53.471686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471692 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:47:53.471699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471726 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:47:53.471733 | orchestrator | 2026-03-19 02:47:53.471739 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-19 02:47:53.471761 | orchestrator | Thursday 19 March 2026 02:47:51 +0000 (0:00:00.499) 0:00:33.693 ******** 2026-03-19 02:47:53.471775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471782 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:47:53.471788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471795 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:47:53.471801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:47:53.471808 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:47:53.471814 | orchestrator | 2026-03-19 02:47:53.471820 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-19 02:47:53.471826 | orchestrator | Thursday 19 March 2026 02:47:51 +0000 (0:00:00.684) 0:00:34.377 ******** 2026-03-19 02:47:53.471833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:47:53.471862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:00.943519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:00.943627 | orchestrator | 2026-03-19 02:48:00.943637 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-19 02:48:00.943645 | orchestrator | Thursday 19 March 2026 02:47:53 +0000 (0:00:01.685) 0:00:36.063 ******** 2026-03-19 02:48:00.943653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:00.943660 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:00.943714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:00.943722 | orchestrator | 2026-03-19 02:48:00.943729 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-03-19 02:48:00.943735 | orchestrator | Thursday 19 March 2026 02:47:55 +0000 (0:00:02.511) 0:00:38.574 ******** 2026-03-19 02:48:00.943756 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 02:48:00.943765 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 02:48:00.943771 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-19 02:48:00.943777 | orchestrator | 2026-03-19 02:48:00.943783 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-19 02:48:00.943789 | orchestrator | Thursday 19 March 2026 02:47:57 +0000 (0:00:01.534) 0:00:40.108 ******** 2026-03-19 02:48:00.943795 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:48:00.943803 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:48:00.943809 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:48:00.943815 | orchestrator | 2026-03-19 02:48:00.943821 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-19 02:48:00.943827 | orchestrator | Thursday 19 March 2026 02:47:59 +0000 (0:00:01.513) 0:00:41.622 ******** 2026-03-19 02:48:00.943833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:48:00.943847 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:48:00.943854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:48:00.943861 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:48:00.943867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-19 02:48:00.943873 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:48:00.943879 | orchestrator | 2026-03-19 02:48:00.943889 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-19 02:48:00.943896 | orchestrator | Thursday 19 March 2026 02:47:59 +0000 (0:00:00.767) 0:00:42.389 ******** 2026-03-19 02:48:00.943908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:30.721742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:30.721874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-19 02:48:30.721885 | orchestrator | 2026-03-19 02:48:30.721892 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-19 02:48:30.721897 | orchestrator | Thursday 19 March 2026 02:48:00 +0000 (0:00:01.147) 0:00:43.537 ******** 2026-03-19 02:48:30.721902 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:48:30.721907 | orchestrator | 2026-03-19 02:48:30.721912 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-19 02:48:30.721916 | orchestrator | Thursday 19 March 2026 02:48:03 +0000 (0:00:02.300) 0:00:45.837 ******** 2026-03-19 02:48:30.721921 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:48:30.721925 | orchestrator | 2026-03-19 02:48:30.721933 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-19 02:48:30.721942 | orchestrator | Thursday 19 March 2026 02:48:05 +0000 (0:00:02.338) 0:00:48.176 ******** 2026-03-19 02:48:30.721952 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:48:30.721959 | orchestrator | 2026-03-19 02:48:30.721965 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 02:48:30.721973 | orchestrator | Thursday 19 March 2026 02:48:20 +0000 (0:00:14.558) 0:01:02.734 ******** 2026-03-19 02:48:30.721980 | orchestrator | 2026-03-19 02:48:30.721988 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 02:48:30.721995 | orchestrator | Thursday 19 March 2026 02:48:20 +0000 (0:00:00.071) 0:01:02.806 ******** 2026-03-19 02:48:30.722002 | orchestrator | 2026-03-19 02:48:30.722008 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-19 02:48:30.722050 | orchestrator | Thursday 19 March 2026 02:48:20 +0000 (0:00:00.070) 0:01:02.876 ******** 2026-03-19 02:48:30.722054 | orchestrator | 2026-03-19 02:48:30.722059 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-19 02:48:30.722075 | orchestrator | Thursday 19 March 2026 02:48:20 +0000 (0:00:00.071) 0:01:02.947 ******** 2026-03-19 02:48:30.722080 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:48:30.722084 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:48:30.722089 | orchestrator | changed: [testbed-node-2] 2026-03-19 
02:48:30.722096 | orchestrator | 2026-03-19 02:48:30.722103 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 02:48:30.722119 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 02:48:30.722127 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 02:48:30.722133 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 02:48:30.722139 | orchestrator | 2026-03-19 02:48:30.722145 | orchestrator | 2026-03-19 02:48:30.722151 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 02:48:30.722166 | orchestrator | Thursday 19 March 2026 02:48:30 +0000 (0:00:09.997) 0:01:12.945 ******** 2026-03-19 02:48:30.722172 | orchestrator | =============================================================================== 2026-03-19 02:48:30.722178 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.56s 2026-03-19 02:48:30.722199 | orchestrator | placement : Restart placement-api container ---------------------------- 10.00s 2026-03-19 02:48:30.722206 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.13s 2026-03-19 02:48:30.722213 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.49s 2026-03-19 02:48:30.722219 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.31s 2026-03-19 02:48:30.722226 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.07s 2026-03-19 02:48:30.722232 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.91s 2026-03-19 02:48:30.722239 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.52s 2026-03-19 02:48:30.722245 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.51s 2026-03-19 02:48:30.722252 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2026-03-19 02:48:30.722259 | orchestrator | placement : Creating placement databases -------------------------------- 2.30s 2026-03-19 02:48:30.722266 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-03-19 02:48:30.722272 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.66s 2026-03-19 02:48:30.722279 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s 2026-03-19 02:48:30.722286 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2026-03-19 02:48:30.722293 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s 2026-03-19 02:48:30.722300 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.04s 2026-03-19 02:48:30.722308 | orchestrator | placement : Copying over existing policy file --------------------------- 0.77s 2026-03-19 02:48:30.722315 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s 2026-03-19 02:48:30.722322 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s 2026-03-19 02:48:33.664262 | orchestrator | 2026-03-19 02:48:33 | INFO  | Task 750f956f-d0a2-489f-904a-1882f6edad25 (neutron) was prepared for execution. 2026-03-19 02:48:33.664347 | orchestrator | 2026-03-19 02:48:33 | INFO  | It takes a moment until task 750f956f-d0a2-489f-904a-1882f6edad25 (neutron) has been started and output is visible here. 
2026-03-19 02:49:24.241773 | orchestrator | 2026-03-19 02:49:24.241870 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:49:24.241881 | orchestrator | 2026-03-19 02:49:24.241888 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:49:24.241895 | orchestrator | Thursday 19 March 2026 02:48:38 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-03-19 02:49:24.241902 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:49:24.241910 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:49:24.241916 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:49:24.241922 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:49:24.241929 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:49:24.241935 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:49:24.241941 | orchestrator | 2026-03-19 02:49:24.241947 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:49:24.241954 | orchestrator | Thursday 19 March 2026 02:48:39 +0000 (0:00:00.704) 0:00:00.989 ******** 2026-03-19 02:49:24.241960 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-19 02:49:24.241966 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-19 02:49:24.241973 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-19 02:49:24.241979 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-19 02:49:24.242008 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-19 02:49:24.242059 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-19 02:49:24.242067 | orchestrator | 2026-03-19 02:49:24.242073 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-19 02:49:24.242079 | orchestrator | 2026-03-19 02:49:24.242085 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-19 02:49:24.242092 | orchestrator | Thursday 19 March 2026 02:48:39 +0000 (0:00:00.600) 0:00:01.590 ******** 2026-03-19 02:49:24.242112 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:49:24.242120 | orchestrator | 2026-03-19 02:49:24.242126 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-19 02:49:24.242132 | orchestrator | Thursday 19 March 2026 02:48:40 +0000 (0:00:01.194) 0:00:02.784 ******** 2026-03-19 02:49:24.242139 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:49:24.242145 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:49:24.242151 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:49:24.242157 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:49:24.242163 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:49:24.242170 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:49:24.242176 | orchestrator | 2026-03-19 02:49:24.242182 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-19 02:49:24.242188 | orchestrator | Thursday 19 March 2026 02:48:42 +0000 (0:00:01.247) 0:00:04.032 ******** 2026-03-19 02:49:24.242194 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:49:24.242200 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:49:24.242207 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:49:24.242213 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:49:24.242219 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:49:24.242225 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:49:24.242231 | orchestrator | 2026-03-19 02:49:24.242237 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-19 02:49:24.242243 | orchestrator | Thursday 19 March 2026 02:48:43 +0000 (0:00:01.036) 0:00:05.068 ******** 
2026-03-19 02:49:24.242250 | orchestrator | ok: [testbed-node-0] => { 2026-03-19 02:49:24.242257 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242263 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242269 | orchestrator | } 2026-03-19 02:49:24.242276 | orchestrator | ok: [testbed-node-1] => { 2026-03-19 02:49:24.242282 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242288 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242294 | orchestrator | } 2026-03-19 02:49:24.242300 | orchestrator | ok: [testbed-node-2] => { 2026-03-19 02:49:24.242306 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242312 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242319 | orchestrator | } 2026-03-19 02:49:24.242325 | orchestrator | ok: [testbed-node-3] => { 2026-03-19 02:49:24.242332 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242339 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242346 | orchestrator | } 2026-03-19 02:49:24.242353 | orchestrator | ok: [testbed-node-4] => { 2026-03-19 02:49:24.242361 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242368 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242375 | orchestrator | } 2026-03-19 02:49:24.242382 | orchestrator | ok: [testbed-node-5] => { 2026-03-19 02:49:24.242389 | orchestrator |  "changed": false, 2026-03-19 02:49:24.242396 | orchestrator |  "msg": "All assertions passed" 2026-03-19 02:49:24.242403 | orchestrator | } 2026-03-19 02:49:24.242410 | orchestrator | 2026-03-19 02:49:24.242417 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-19 02:49:24.242447 | orchestrator | Thursday 19 March 2026 02:48:43 +0000 (0:00:00.776) 0:00:05.845 ******** 2026-03-19 02:49:24.242455 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:24.242468 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:24.242475 | orchestrator 
| skipping: [testbed-node-2] 2026-03-19 02:49:24.242483 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:24.242490 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:24.242497 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:24.242504 | orchestrator | 2026-03-19 02:49:24.242512 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-19 02:49:24.242519 | orchestrator | Thursday 19 March 2026 02:48:44 +0000 (0:00:00.600) 0:00:06.445 ******** 2026-03-19 02:49:24.242525 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-19 02:49:24.242531 | orchestrator | 2026-03-19 02:49:24.242538 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-19 02:49:24.242544 | orchestrator | Thursday 19 March 2026 02:48:48 +0000 (0:00:04.124) 0:00:10.570 ******** 2026-03-19 02:49:24.242550 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-19 02:49:24.242557 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-19 02:49:24.242563 | orchestrator | 2026-03-19 02:49:24.242583 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-19 02:49:24.242590 | orchestrator | Thursday 19 March 2026 02:48:55 +0000 (0:00:07.086) 0:00:17.657 ******** 2026-03-19 02:49:24.242596 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 02:49:24.242603 | orchestrator | 2026-03-19 02:49:24.242609 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-19 02:49:24.242615 | orchestrator | Thursday 19 March 2026 02:48:59 +0000 (0:00:03.570) 0:00:21.227 ******** 2026-03-19 02:49:24.242621 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 02:49:24.242627 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-19 02:49:24.242634 | orchestrator | 2026-03-19 02:49:24.242640 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-19 02:49:24.242646 | orchestrator | Thursday 19 March 2026 02:49:03 +0000 (0:00:04.294) 0:00:25.522 ******** 2026-03-19 02:49:24.242652 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 02:49:24.242659 | orchestrator | 2026-03-19 02:49:24.242665 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-19 02:49:24.242671 | orchestrator | Thursday 19 March 2026 02:49:07 +0000 (0:00:03.435) 0:00:28.958 ******** 2026-03-19 02:49:24.242677 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-19 02:49:24.242683 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-19 02:49:24.242689 | orchestrator | 2026-03-19 02:49:24.242695 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 02:49:24.242702 | orchestrator | Thursday 19 March 2026 02:49:15 +0000 (0:00:08.654) 0:00:37.613 ******** 2026-03-19 02:49:24.242708 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:24.242714 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:24.242724 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:24.242730 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:24.242736 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:24.242743 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:24.242749 | orchestrator | 2026-03-19 02:49:24.242755 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-19 02:49:24.242761 | orchestrator | Thursday 19 March 2026 02:49:16 +0000 (0:00:00.793) 0:00:38.407 ******** 2026-03-19 02:49:24.242768 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
02:49:24.242774 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:24.242780 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:24.242786 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:24.242792 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:24.242798 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:24.242804 | orchestrator | 2026-03-19 02:49:24.242815 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-19 02:49:24.242822 | orchestrator | Thursday 19 March 2026 02:49:18 +0000 (0:00:02.015) 0:00:40.422 ******** 2026-03-19 02:49:24.242828 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:49:24.242834 | orchestrator | ok: [testbed-node-1] 2026-03-19 02:49:24.242840 | orchestrator | ok: [testbed-node-2] 2026-03-19 02:49:24.242846 | orchestrator | ok: [testbed-node-3] 2026-03-19 02:49:24.242852 | orchestrator | ok: [testbed-node-4] 2026-03-19 02:49:24.242859 | orchestrator | ok: [testbed-node-5] 2026-03-19 02:49:24.242865 | orchestrator | 2026-03-19 02:49:24.242871 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-19 02:49:24.242877 | orchestrator | Thursday 19 March 2026 02:49:19 +0000 (0:00:01.141) 0:00:41.564 ******** 2026-03-19 02:49:24.242883 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:24.242890 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:24.242896 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:24.242902 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:24.242908 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:24.242914 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:24.242920 | orchestrator | 2026-03-19 02:49:24.242926 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-19 02:49:24.242933 | orchestrator | Thursday 19 March 2026 02:49:21 +0000 (0:00:02.117) 
0:00:43.681 ******** 2026-03-19 02:49:24.242942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:24.242958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:29.712000 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:29.712141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:29.712163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:29.712187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:29.712204 | orchestrator | 2026-03-19 02:49:29.712221 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-19 02:49:29.712238 | orchestrator | Thursday 19 March 2026 02:49:24 +0000 (0:00:02.474) 0:00:46.156 ******** 2026-03-19 02:49:29.712273 | orchestrator | [WARNING]: Skipped 2026-03-19 02:49:29.712293 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-19 02:49:29.712303 | orchestrator | due to this access issue: 2026-03-19 02:49:29.712313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-19 02:49:29.712322 | orchestrator | a directory 2026-03-19 02:49:29.712331 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 02:49:29.712340 | orchestrator | 2026-03-19 02:49:29.712348 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 02:49:29.712357 | orchestrator | Thursday 19 March 2026 02:49:25 +0000 (0:00:00.820) 0:00:46.977 ******** 2026-03-19 02:49:29.712367 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:49:29.712377 | orchestrator | 2026-03-19 02:49:29.712386 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-19 02:49:29.712437 | orchestrator | Thursday 19 March 2026 02:49:26 +0000 (0:00:01.263) 0:00:48.240 ******** 2026-03-19 02:49:29.712457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:29.712477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:29.712488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:29.712499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:29.712517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:34.432835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:34.432941 | orchestrator | 2026-03-19 02:49:34.432952 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-19 02:49:34.432960 | orchestrator | Thursday 19 March 2026 02:49:29 +0000 (0:00:03.385) 0:00:51.626 ******** 2026-03-19 02:49:34.432969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:34.432977 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:34.432985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:34.432992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:34.433023 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:34.433030 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:34.433052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:34.433058 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:34.433069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:34.433075 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:34.433081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:34.433087 | orchestrator | skipping: [testbed-node-5] 
2026-03-19 02:49:34.433093 | orchestrator | 2026-03-19 02:49:34.433100 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-19 02:49:34.433106 | orchestrator | Thursday 19 March 2026 02:49:31 +0000 (0:00:01.956) 0:00:53.582 ******** 2026-03-19 02:49:34.433112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:34.433119 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:34.433129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:39.631494 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:39.631637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:39.631661 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:39.631675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:39.631688 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:39.631700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:39.631712 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:39.631723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:39.631761 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:39.631773 | orchestrator | 2026-03-19 
02:49:39.631786 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-19 02:49:39.631797 | orchestrator | Thursday 19 March 2026 02:49:34 +0000 (0:00:02.758) 0:00:56.341 ******** 2026-03-19 02:49:39.631808 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:39.631819 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:39.631830 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:39.631840 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:39.631851 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:39.631862 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:39.631872 | orchestrator | 2026-03-19 02:49:39.631883 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-19 02:49:39.631894 | orchestrator | Thursday 19 March 2026 02:49:36 +0000 (0:00:02.312) 0:00:58.654 ******** 2026-03-19 02:49:39.631905 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:39.631916 | orchestrator | 2026-03-19 02:49:39.631927 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-19 02:49:39.631954 | orchestrator | Thursday 19 March 2026 02:49:36 +0000 (0:00:00.140) 0:00:58.794 ******** 2026-03-19 02:49:39.631966 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:39.631977 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:39.631990 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:39.632021 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:39.632061 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:39.632083 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:39.632102 | orchestrator | 2026-03-19 02:49:39.632120 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-19 02:49:39.632138 | orchestrator | Thursday 19 March 2026 02:49:37 +0000 (0:00:00.588) 
0:00:59.383 ******** 2026-03-19 02:49:39.632165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:39.632184 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:39.632203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 
02:49:39.632235 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:39.632254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:39.632273 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:39.632292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:39.632312 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:39.632355 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:48.087636 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:48.087725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:48.087735 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:48.087740 | orchestrator | 2026-03-19 02:49:48.087746 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-19 02:49:48.087752 | orchestrator | Thursday 19 March 2026 02:49:39 +0000 (0:00:02.157) 0:01:01.541 ******** 2026-03-19 02:49:48.087758 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:48.087785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:48.087791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:48.087823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:48.087828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:48.087837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:48.087842 | orchestrator | 2026-03-19 02:49:48.087847 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-19 02:49:48.087852 | orchestrator | Thursday 19 March 2026 02:49:42 +0000 (0:00:03.073) 0:01:04.614 ******** 2026-03-19 02:49:48.087857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:48.087861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:48.087874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:52.777663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:49:52.777843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 
02:49:52.777863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:49:52.777877 | orchestrator | 2026-03-19 02:49:52.777890 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-19 02:49:52.777903 | orchestrator | Thursday 19 March 2026 02:49:48 +0000 (0:00:05.385) 0:01:09.999 ******** 2026-03-19 02:49:52.777931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-19 02:49:52.777944 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:49:52.777984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:52.778082 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:49:52.778109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:52.778128 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:52.778147 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:49:52.778167 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:49:52.778189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:52.778211 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:52.778243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:49:52.778290 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:52.778313 | orchestrator | 2026-03-19 02:49:52.778354 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-19 02:49:52.778375 | orchestrator | Thursday 19 March 2026 02:49:50 +0000 (0:00:02.117) 0:01:12.117 ******** 2026-03-19 02:49:52.778390 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:49:52.778403 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:49:52.778415 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:49:52.778428 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:49:52.778440 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:49:52.778463 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:50:11.495833 | orchestrator | 2026-03-19 02:50:11.495955 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-19 02:50:11.495974 | orchestrator | Thursday 19 March 2026 02:49:52 +0000 (0:00:02.574) 0:01:14.691 ******** 2026-03-19 02:50:11.495986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:11.496000 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:11.496023 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:11.496044 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:11.496135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:11.496143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:11.496149 | orchestrator | 2026-03-19 02:50:11.496155 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-19 02:50:11.496161 | orchestrator | Thursday 19 March 2026 02:49:56 +0000 (0:00:03.304) 0:01:17.995 ******** 2026-03-19 02:50:11.496167 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496173 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496179 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496185 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496191 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496197 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496202 | orchestrator | 2026-03-19 02:50:11.496208 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-03-19 02:50:11.496214 | orchestrator | Thursday 19 March 2026 02:49:58 +0000 (0:00:02.286) 0:01:20.282 ******** 2026-03-19 02:50:11.496220 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496226 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496231 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496237 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496243 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496249 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496254 | orchestrator | 2026-03-19 02:50:11.496327 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-19 02:50:11.496334 | orchestrator | Thursday 19 March 2026 02:50:00 +0000 (0:00:02.250) 0:01:22.533 ******** 2026-03-19 02:50:11.496340 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496346 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496352 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496357 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496363 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496370 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496384 | orchestrator | 2026-03-19 02:50:11.496392 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-19 02:50:11.496398 | orchestrator | Thursday 19 March 2026 02:50:02 +0000 (0:00:02.334) 0:01:24.867 ******** 2026-03-19 02:50:11.496405 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496412 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496418 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496424 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496430 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496437 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 02:50:11.496443 | orchestrator | 2026-03-19 02:50:11.496450 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-19 02:50:11.496456 | orchestrator | Thursday 19 March 2026 02:50:05 +0000 (0:00:02.132) 0:01:27.000 ******** 2026-03-19 02:50:11.496463 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496469 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496475 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496482 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496488 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496495 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496501 | orchestrator | 2026-03-19 02:50:11.496509 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-19 02:50:11.496516 | orchestrator | Thursday 19 March 2026 02:50:07 +0000 (0:00:02.162) 0:01:29.163 ******** 2026-03-19 02:50:11.496527 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496543 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:11.496554 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496564 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:11.496575 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:11.496584 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:11.496595 | orchestrator | 2026-03-19 02:50:11.496606 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-19 02:50:11.496618 | orchestrator | Thursday 19 March 2026 02:50:09 +0000 (0:00:02.098) 0:01:31.261 ******** 2026-03-19 02:50:11.496628 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:11.496639 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:11.496650 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:11.496660 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:11.496671 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:11.496689 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:15.615423 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:15.615536 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:15.615552 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:15.615560 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:15.615567 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-19 02:50:15.615574 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:15.615581 | orchestrator | 2026-03-19 02:50:15.615589 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-19 02:50:15.615596 | orchestrator | Thursday 19 March 2026 02:50:11 +0000 (0:00:02.140) 0:01:33.402 ******** 2026-03-19 02:50:15.615605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:15.615643 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:15.615649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:15.615653 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:15.615670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:15.615674 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:15.615692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:15.615697 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:15.615701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:15.615708 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 02:50:15.615713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:15.615716 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:15.615720 | orchestrator | 2026-03-19 02:50:15.615724 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-19 02:50:15.615728 | orchestrator | Thursday 19 March 2026 02:50:13 +0000 (0:00:02.042) 0:01:35.445 ******** 2026-03-19 02:50:15.615732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:15.615736 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:15.615743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:15.615747 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:15.615755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:41.322250 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.322395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:41.322418 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.322431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:41.322443 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.322456 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:41.322467 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.322478 | orchestrator | 2026-03-19 02:50:41.322491 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-19 02:50:41.322504 | orchestrator | Thursday 19 March 2026 02:50:15 +0000 (0:00:02.082) 0:01:37.527 ******** 2026-03-19 02:50:41.322514 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.322525 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.322536 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.322548 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.322580 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.322591 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.322602 | orchestrator | 2026-03-19 02:50:41.322613 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-19 02:50:41.322623 | orchestrator | Thursday 19 March 2026 02:50:17 +0000 (0:00:02.117) 0:01:39.645 ******** 2026-03-19 02:50:41.322634 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.322646 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
02:50:41.322656 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.322666 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:50:41.322677 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:50:41.322687 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:50:41.322727 | orchestrator | 2026-03-19 02:50:41.322741 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-19 02:50:41.322754 | orchestrator | Thursday 19 March 2026 02:50:21 +0000 (0:00:03.645) 0:01:43.290 ******** 2026-03-19 02:50:41.322766 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.322778 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.322789 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.322801 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.322813 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.322825 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.322837 | orchestrator | 2026-03-19 02:50:41.322848 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-19 02:50:41.322860 | orchestrator | Thursday 19 March 2026 02:50:23 +0000 (0:00:02.054) 0:01:45.345 ******** 2026-03-19 02:50:41.322871 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.322884 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.322895 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.322908 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.322920 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.322930 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.322942 | orchestrator | 2026-03-19 02:50:41.322954 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-19 02:50:41.322991 | orchestrator | Thursday 19 March 2026 02:50:25 +0000 (0:00:02.257) 0:01:47.602 ******** 2026-03-19 
02:50:41.323003 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323014 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323026 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.323038 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323049 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323061 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323074 | orchestrator | 2026-03-19 02:50:41.323087 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-19 02:50:41.323100 | orchestrator | Thursday 19 March 2026 02:50:28 +0000 (0:00:02.416) 0:01:50.019 ******** 2026-03-19 02:50:41.323114 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323127 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323139 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323151 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.323189 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323203 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323214 | orchestrator | 2026-03-19 02:50:41.323226 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-19 02:50:41.323237 | orchestrator | Thursday 19 March 2026 02:50:30 +0000 (0:00:02.093) 0:01:52.113 ******** 2026-03-19 02:50:41.323249 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323260 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323271 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.323283 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323295 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323306 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323317 | orchestrator | 2026-03-19 02:50:41.323328 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-19 02:50:41.323339 | orchestrator | Thursday 19 March 2026 02:50:32 +0000 (0:00:02.182) 0:01:54.295 ******** 2026-03-19 02:50:41.323350 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.323362 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323374 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323386 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323397 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323409 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323421 | orchestrator | 2026-03-19 02:50:41.323433 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-19 02:50:41.323445 | orchestrator | Thursday 19 March 2026 02:50:34 +0000 (0:00:02.167) 0:01:56.463 ******** 2026-03-19 02:50:41.323469 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323480 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323491 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:41.323502 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323513 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323525 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323538 | orchestrator | 2026-03-19 02:50:41.323551 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-19 02:50:41.323563 | orchestrator | Thursday 19 March 2026 02:50:37 +0000 (0:00:02.467) 0:01:58.930 ******** 2026-03-19 02:50:41.323575 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323588 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:41.323601 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323614 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 02:50:41.323627 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323640 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323652 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323664 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:41.323676 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323688 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:41.323711 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-19 02:50:41.323723 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:41.323733 | orchestrator | 2026-03-19 02:50:41.323742 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-19 02:50:41.323752 | orchestrator | Thursday 19 March 2026 02:50:38 +0000 (0:00:01.889) 0:02:00.820 ******** 2026-03-19 02:50:41.323765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:41.323777 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:50:41.323811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:43.948421 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:50:43.948548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-19 02:50:43.948578 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:50:43.948595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:43.948610 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:50:43.948649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:43.948664 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:50:43.948681 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 02:50:43.948696 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:50:43.948712 | orchestrator | 2026-03-19 02:50:43.948729 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-19 02:50:43.948747 | orchestrator | Thursday 19 March 2026 02:50:41 +0000 (0:00:02.411) 0:02:03.231 ******** 2026-03-19 02:50:43.948789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:50:43.948846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:43.948873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:43.948890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-19 02:50:43.948908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:50:43.948938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-19 02:52:56.700963 | orchestrator | 2026-03-19 02:52:56.701077 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-19 02:52:56.701088 | orchestrator | Thursday 19 March 2026 02:50:43 +0000 (0:00:02.630) 0:02:05.862 ******** 2026-03-19 02:52:56.701095 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:52:56.701103 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:52:56.701110 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:52:56.701117 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:52:56.701123 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:52:56.701130 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:52:56.701137 | orchestrator | 2026-03-19 02:52:56.701143 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-19 02:52:56.701150 | orchestrator | Thursday 19 March 2026 02:50:44 +0000 (0:00:00.748) 0:02:06.611 ******** 2026-03-19 02:52:56.701157 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:52:56.701163 | orchestrator | 2026-03-19 02:52:56.701170 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-19 02:52:56.701177 | orchestrator | Thursday 19 March 2026 02:50:46 +0000 (0:00:02.283) 0:02:08.894 ******** 2026-03-19 02:52:56.701184 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:52:56.701191 | orchestrator | 2026-03-19 02:52:56.701198 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-19 02:52:56.701205 | orchestrator | Thursday 19 
March 2026 02:50:49 +0000 (0:00:02.374) 0:02:11.269 ******** 2026-03-19 02:52:56.701212 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:52:56.701219 | orchestrator | 2026-03-19 02:52:56.701226 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701233 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:43.021) 0:02:54.290 ******** 2026-03-19 02:52:56.701241 | orchestrator | 2026-03-19 02:52:56.701248 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701254 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.069) 0:02:54.360 ******** 2026-03-19 02:52:56.701260 | orchestrator | 2026-03-19 02:52:56.701267 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701273 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.082) 0:02:54.443 ******** 2026-03-19 02:52:56.701280 | orchestrator | 2026-03-19 02:52:56.701287 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701312 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.068) 0:02:54.511 ******** 2026-03-19 02:52:56.701328 | orchestrator | 2026-03-19 02:52:56.701337 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701343 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.068) 0:02:54.580 ******** 2026-03-19 02:52:56.701350 | orchestrator | 2026-03-19 02:52:56.701357 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-19 02:52:56.701363 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.070) 0:02:54.651 ******** 2026-03-19 02:52:56.701370 | orchestrator | 2026-03-19 02:52:56.701376 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-03-19 02:52:56.701409 | orchestrator | Thursday 19 March 2026 02:51:32 +0000 (0:00:00.071) 0:02:54.722 ********
2026-03-19 02:52:56.701415 | orchestrator | changed: [testbed-node-0]
2026-03-19 02:52:56.701421 | orchestrator | changed: [testbed-node-1]
2026-03-19 02:52:56.701428 | orchestrator | changed: [testbed-node-2]
2026-03-19 02:52:56.701434 | orchestrator |
2026-03-19 02:52:56.701440 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-19 02:52:56.701447 | orchestrator | Thursday 19 March 2026 02:52:00 +0000 (0:00:27.903) 0:03:22.625 ********
2026-03-19 02:52:56.701453 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:52:56.701459 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:52:56.701465 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:52:56.701471 | orchestrator |
2026-03-19 02:52:56.701478 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 02:52:56.701485 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-19 02:52:56.701494 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-19 02:52:56.701501 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-19 02:52:56.701508 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-19 02:52:56.701514 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-19 02:52:56.701521 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-19 02:52:56.701528 | orchestrator |
2026-03-19 02:52:56.701535 | orchestrator |
2026-03-19 02:52:56.701544 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 02:52:56.701553 | orchestrator | Thursday 19 March 2026 02:52:56 +0000 (0:00:55.507) 0:04:18.133 ********
2026-03-19 02:52:56.701563 | orchestrator | ===============================================================================
2026-03-19 02:52:56.701571 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.51s
2026-03-19 02:52:56.701580 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.02s
2026-03-19 02:52:56.701589 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.90s
2026-03-19 02:52:56.701621 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.66s
2026-03-19 02:52:56.701634 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.09s
2026-03-19 02:52:56.701643 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.39s
2026-03-19 02:52:56.701654 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.29s
2026-03-19 02:52:56.701663 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.12s
2026-03-19 02:52:56.701672 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.65s
2026-03-19 02:52:56.701679 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.57s
2026-03-19 02:52:56.701686 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.44s
2026-03-19 02:52:56.701692 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.39s
2026-03-19 02:52:56.701699 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.30s
2026-03-19 02:52:56.701708 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.07s
2026-03-19 02:52:56.701717 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.76s
2026-03-19 02:52:56.701735 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.63s
2026-03-19 02:52:56.701746 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.57s
2026-03-19 02:52:56.701756 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.47s
2026-03-19 02:52:56.701764 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.47s
2026-03-19 02:52:56.701772 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 2.42s
2026-03-19 02:52:59.089285 | orchestrator | 2026-03-19 02:52:59 | INFO  | Task f2ddb0b5-73cc-409f-8a5e-58ab5e50cac0 (nova) was prepared for execution.
2026-03-19 02:52:59.089405 | orchestrator | 2026-03-19 02:52:59 | INFO  | It takes a moment until task f2ddb0b5-73cc-409f-8a5e-58ab5e50cac0 (nova) has been started and output is visible here.
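The PLAY RECAP block above is the part of this log worth checking mechanically: a host is healthy only when `failed=0` and `unreachable=0`. A minimal sketch of such a check — the helper names and the regex are my own, assuming only the standard Ansible recap line format shown in this log:

```python
import re

# Matches standard Ansible "PLAY RECAP" lines, e.g.:
#   testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32 ...
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line):
    """Return a dict of counters for one recap line, or None if it doesn't match."""
    m = RECAP_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    return {"host": d["host"], **{k: int(v) for k, v in d.items() if k != "host"}}

def job_healthy(lines):
    """True when at least one recap line parses and none reports failures."""
    stats = [s for s in (parse_recap(l) for l in lines) if s]
    return bool(stats) and all(
        s["failed"] == 0 and s["unreachable"] == 0 for s in stats
    )
```

Applied to the recap above, all six testbed nodes report `failed=0 unreachable=0`, so the neutron play passes this check.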
2026-03-19 02:55:07.966638 | orchestrator | 2026-03-19 02:55:07.966744 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 02:55:07.966757 | orchestrator | 2026-03-19 02:55:07.966769 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-19 02:55:07.966780 | orchestrator | Thursday 19 March 2026 02:53:03 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-03-19 02:55:07.966789 | orchestrator | changed: [testbed-manager] 2026-03-19 02:55:07.966800 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.966808 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:55:07.966823 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:55:07.966837 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:55:07.966847 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:55:07.966857 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:55:07.966867 | orchestrator | 2026-03-19 02:55:07.966877 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 02:55:07.966886 | orchestrator | Thursday 19 March 2026 02:53:04 +0000 (0:00:00.873) 0:00:01.153 ******** 2026-03-19 02:55:07.966896 | orchestrator | changed: [testbed-manager] 2026-03-19 02:55:07.966906 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.966916 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:55:07.966926 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:55:07.966937 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:55:07.966948 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:55:07.966959 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:55:07.966970 | orchestrator | 2026-03-19 02:55:07.966981 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 02:55:07.966992 | orchestrator | Thursday 19 March 2026 02:53:05 +0000 (0:00:00.860) 0:00:02.014 
******** 2026-03-19 02:55:07.967001 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-19 02:55:07.967008 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-19 02:55:07.967016 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-19 02:55:07.967026 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-19 02:55:07.967033 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-19 02:55:07.967039 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-19 02:55:07.967046 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-19 02:55:07.967052 | orchestrator | 2026-03-19 02:55:07.967058 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-19 02:55:07.967064 | orchestrator | 2026-03-19 02:55:07.967071 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-19 02:55:07.967077 | orchestrator | Thursday 19 March 2026 02:53:05 +0000 (0:00:00.722) 0:00:02.737 ******** 2026-03-19 02:55:07.967083 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:55:07.967090 | orchestrator | 2026-03-19 02:55:07.967096 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-19 02:55:07.967104 | orchestrator | Thursday 19 March 2026 02:53:06 +0000 (0:00:00.772) 0:00:03.510 ******** 2026-03-19 02:55:07.967133 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-19 02:55:07.967141 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-19 02:55:07.967148 | orchestrator | 2026-03-19 02:55:07.967155 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-19 02:55:07.967162 | orchestrator | Thursday 19 March 2026 02:53:11 +0000 (0:00:04.648) 0:00:08.159 
******** 2026-03-19 02:55:07.967169 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 02:55:07.967177 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-19 02:55:07.967232 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967240 | orchestrator | 2026-03-19 02:55:07.967248 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-19 02:55:07.967255 | orchestrator | Thursday 19 March 2026 02:53:16 +0000 (0:00:04.878) 0:00:13.037 ******** 2026-03-19 02:55:07.967264 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967271 | orchestrator | 2026-03-19 02:55:07.967278 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-19 02:55:07.967285 | orchestrator | Thursday 19 March 2026 02:53:16 +0000 (0:00:00.624) 0:00:13.662 ******** 2026-03-19 02:55:07.967291 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967297 | orchestrator | 2026-03-19 02:55:07.967303 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-19 02:55:07.967315 | orchestrator | Thursday 19 March 2026 02:53:18 +0000 (0:00:01.368) 0:00:15.030 ******** 2026-03-19 02:55:07.967326 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967337 | orchestrator | 2026-03-19 02:55:07.967348 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-19 02:55:07.967358 | orchestrator | Thursday 19 March 2026 02:53:20 +0000 (0:00:02.602) 0:00:17.633 ******** 2026-03-19 02:55:07.967368 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.967378 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967388 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967397 | orchestrator | 2026-03-19 02:55:07.967408 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-19 
02:55:07.967418 | orchestrator | Thursday 19 March 2026 02:53:21 +0000 (0:00:00.297) 0:00:17.930 ******** 2026-03-19 02:55:07.967429 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:55:07.967440 | orchestrator | 2026-03-19 02:55:07.967450 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-19 02:55:07.967461 | orchestrator | Thursday 19 March 2026 02:53:56 +0000 (0:00:35.357) 0:00:53.288 ******** 2026-03-19 02:55:07.967472 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967482 | orchestrator | 2026-03-19 02:55:07.967491 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-19 02:55:07.967518 | orchestrator | Thursday 19 March 2026 02:54:12 +0000 (0:00:16.240) 0:01:09.528 ******** 2026-03-19 02:55:07.967529 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:55:07.967539 | orchestrator | 2026-03-19 02:55:07.967549 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-19 02:55:07.967576 | orchestrator | Thursday 19 March 2026 02:54:25 +0000 (0:00:13.132) 0:01:22.661 ******** 2026-03-19 02:55:07.967607 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:55:07.967618 | orchestrator | 2026-03-19 02:55:07.967628 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-19 02:55:07.967638 | orchestrator | Thursday 19 March 2026 02:54:26 +0000 (0:00:00.673) 0:01:23.334 ******** 2026-03-19 02:55:07.967648 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.967657 | orchestrator | 2026-03-19 02:55:07.967666 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-19 02:55:07.967676 | orchestrator | Thursday 19 March 2026 02:54:27 +0000 (0:00:00.481) 0:01:23.815 ******** 2026-03-19 02:55:07.967687 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-19 02:55:07.967710 | orchestrator | 2026-03-19 02:55:07.967720 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-19 02:55:07.967731 | orchestrator | Thursday 19 March 2026 02:54:27 +0000 (0:00:00.700) 0:01:24.516 ******** 2026-03-19 02:55:07.967740 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:55:07.967750 | orchestrator | 2026-03-19 02:55:07.967760 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-19 02:55:07.967767 | orchestrator | Thursday 19 March 2026 02:54:47 +0000 (0:00:20.196) 0:01:44.712 ******** 2026-03-19 02:55:07.967773 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.967779 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967785 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967791 | orchestrator | 2026-03-19 02:55:07.967797 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-19 02:55:07.967803 | orchestrator | 2026-03-19 02:55:07.967810 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-19 02:55:07.967816 | orchestrator | Thursday 19 March 2026 02:54:48 +0000 (0:00:00.324) 0:01:45.037 ******** 2026-03-19 02:55:07.967822 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:55:07.967828 | orchestrator | 2026-03-19 02:55:07.967834 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-19 02:55:07.967840 | orchestrator | Thursday 19 March 2026 02:54:49 +0000 (0:00:00.766) 0:01:45.804 ******** 2026-03-19 02:55:07.967846 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967852 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967859 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967865 | orchestrator | 
2026-03-19 02:55:07.967871 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-19 02:55:07.967877 | orchestrator | Thursday 19 March 2026 02:54:51 +0000 (0:00:02.235) 0:01:48.040 ******** 2026-03-19 02:55:07.967883 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967889 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967895 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.967901 | orchestrator | 2026-03-19 02:55:07.967907 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-19 02:55:07.967914 | orchestrator | Thursday 19 March 2026 02:54:53 +0000 (0:00:02.487) 0:01:50.527 ******** 2026-03-19 02:55:07.967920 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.967926 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967932 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967938 | orchestrator | 2026-03-19 02:55:07.967944 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-19 02:55:07.967950 | orchestrator | Thursday 19 March 2026 02:54:54 +0000 (0:00:00.541) 0:01:51.069 ******** 2026-03-19 02:55:07.967956 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 02:55:07.967963 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.967969 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-19 02:55:07.967975 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.967981 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 02:55:07.967988 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-19 02:55:07.967994 | orchestrator | 2026-03-19 02:55:07.968000 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-19 02:55:07.968007 | orchestrator | Thursday 19 March 2026 02:55:02 +0000 
(0:00:08.263) 0:01:59.332 ******** 2026-03-19 02:55:07.968013 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.968019 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.968025 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.968031 | orchestrator | 2026-03-19 02:55:07.968037 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-19 02:55:07.968043 | orchestrator | Thursday 19 March 2026 02:55:02 +0000 (0:00:00.334) 0:01:59.666 ******** 2026-03-19 02:55:07.968049 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-19 02:55:07.968060 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:55:07.968066 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 02:55:07.968073 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.968079 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-19 02:55:07.968085 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.968091 | orchestrator | 2026-03-19 02:55:07.968097 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-19 02:55:07.968103 | orchestrator | Thursday 19 March 2026 02:55:04 +0000 (0:00:01.101) 0:02:00.768 ******** 2026-03-19 02:55:07.968109 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.968116 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.968122 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:55:07.968128 | orchestrator | 2026-03-19 02:55:07.968137 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-19 02:55:07.968147 | orchestrator | Thursday 19 March 2026 02:55:04 +0000 (0:00:00.478) 0:02:01.246 ******** 2026-03-19 02:55:07.968157 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.968167 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.968177 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 02:55:07.968186 | orchestrator | 2026-03-19 02:55:07.968194 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-19 02:55:07.968204 | orchestrator | Thursday 19 March 2026 02:55:05 +0000 (0:00:01.037) 0:02:02.284 ******** 2026-03-19 02:55:07.968214 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:55:07.968224 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:55:07.968243 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:56:33.492498 | orchestrator | 2026-03-19 02:56:33.492595 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-19 02:56:33.492607 | orchestrator | Thursday 19 March 2026 02:55:07 +0000 (0:00:02.409) 0:02:04.694 ******** 2026-03-19 02:56:33.492615 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492623 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:33.492630 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:56:33.492638 | orchestrator | 2026-03-19 02:56:33.492645 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-19 02:56:33.492652 | orchestrator | Thursday 19 March 2026 02:55:30 +0000 (0:00:22.783) 0:02:27.478 ******** 2026-03-19 02:56:33.492659 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492666 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:33.492673 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:56:33.492679 | orchestrator | 2026-03-19 02:56:33.492686 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-19 02:56:33.492693 | orchestrator | Thursday 19 March 2026 02:55:44 +0000 (0:00:13.517) 0:02:40.995 ******** 2026-03-19 02:56:33.492700 | orchestrator | ok: [testbed-node-0] 2026-03-19 02:56:33.492706 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492713 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 02:56:33.492720 | orchestrator | 2026-03-19 02:56:33.492726 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-19 02:56:33.492733 | orchestrator | Thursday 19 March 2026 02:55:45 +0000 (0:00:01.126) 0:02:42.122 ******** 2026-03-19 02:56:33.492740 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492747 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:33.492753 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:56:33.492760 | orchestrator | 2026-03-19 02:56:33.492767 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-19 02:56:33.492773 | orchestrator | Thursday 19 March 2026 02:55:59 +0000 (0:00:14.173) 0:02:56.295 ******** 2026-03-19 02:56:33.492780 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:33.492787 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492793 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:33.492800 | orchestrator | 2026-03-19 02:56:33.492807 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-19 02:56:33.492836 | orchestrator | Thursday 19 March 2026 02:56:00 +0000 (0:00:00.943) 0:02:57.238 ******** 2026-03-19 02:56:33.492843 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:33.492849 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:33.492856 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:33.492863 | orchestrator | 2026-03-19 02:56:33.492869 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-19 02:56:33.492876 | orchestrator | 2026-03-19 02:56:33.492882 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-19 02:56:33.492889 | orchestrator | Thursday 19 March 2026 02:56:00 +0000 (0:00:00.312) 0:02:57.550 ******** 2026-03-19 02:56:33.492938 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:56:33.492947 | orchestrator | 2026-03-19 02:56:33.492954 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-19 02:56:33.492960 | orchestrator | Thursday 19 March 2026 02:56:01 +0000 (0:00:00.746) 0:02:58.297 ******** 2026-03-19 02:56:33.492967 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-19 02:56:33.492974 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-19 02:56:33.492980 | orchestrator | 2026-03-19 02:56:33.492987 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-19 02:56:33.492994 | orchestrator | Thursday 19 March 2026 02:56:05 +0000 (0:00:03.745) 0:03:02.043 ******** 2026-03-19 02:56:33.493001 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-19 02:56:33.493009 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-19 02:56:33.493016 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-19 02:56:33.493024 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-19 02:56:33.493030 | orchestrator | 2026-03-19 02:56:33.493037 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-19 02:56:33.493044 | orchestrator | Thursday 19 March 2026 02:56:12 +0000 (0:00:07.106) 0:03:09.149 ******** 2026-03-19 02:56:33.493050 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 02:56:33.493057 | orchestrator | 2026-03-19 02:56:33.493063 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-03-19 02:56:33.493070 | orchestrator | Thursday 19 March 2026 02:56:15 +0000 (0:00:03.460) 0:03:12.609 ******** 2026-03-19 02:56:33.493076 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 02:56:33.493083 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-19 02:56:33.493090 | orchestrator | 2026-03-19 02:56:33.493096 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-19 02:56:33.493103 | orchestrator | Thursday 19 March 2026 02:56:20 +0000 (0:00:04.232) 0:03:16.842 ******** 2026-03-19 02:56:33.493109 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 02:56:33.493116 | orchestrator | 2026-03-19 02:56:33.493122 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-19 02:56:33.493129 | orchestrator | Thursday 19 March 2026 02:56:23 +0000 (0:00:03.575) 0:03:20.418 ******** 2026-03-19 02:56:33.493135 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-19 02:56:33.493142 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-19 02:56:33.493148 | orchestrator | 2026-03-19 02:56:33.493159 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-19 02:56:33.493178 | orchestrator | Thursday 19 March 2026 02:56:32 +0000 (0:00:08.442) 0:03:28.860 ******** 2026-03-19 02:56:33.493189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:33.493209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:33.493218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:33.493235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-19 02:56:38.071665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:38.071762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:38.071776 | orchestrator | 2026-03-19 02:56:38.071788 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-19 02:56:38.071798 | orchestrator | Thursday 19 March 2026 02:56:33 +0000 (0:00:01.365) 0:03:30.226 ******** 2026-03-19 02:56:38.071807 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:38.071817 | orchestrator | 2026-03-19 02:56:38.071826 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-19 02:56:38.071834 | orchestrator | Thursday 19 March 2026 02:56:33 +0000 (0:00:00.132) 0:03:30.358 ******** 2026-03-19 02:56:38.071843 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:38.071852 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:38.071860 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:38.071869 | orchestrator | 2026-03-19 02:56:38.071878 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-19 02:56:38.071886 | orchestrator | Thursday 19 March 2026 02:56:33 +0000 (0:00:00.302) 0:03:30.661 ******** 2026-03-19 02:56:38.071895 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 02:56:38.071904 | orchestrator | 2026-03-19 02:56:38.071912 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-19 02:56:38.071921 | orchestrator | Thursday 19 March 2026 02:56:34 +0000 (0:00:00.696) 0:03:31.357 ******** 2026-03-19 02:56:38.071930 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:38.071939 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:38.071947 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:38.071956 | orchestrator | 2026-03-19 02:56:38.071964 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-19 02:56:38.071973 | orchestrator | Thursday 19 March 2026 02:56:35 +0000 (0:00:00.541) 0:03:31.899 ******** 2026-03-19 02:56:38.071982 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:56:38.071992 | orchestrator | 2026-03-19 02:56:38.072001 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-19 02:56:38.072010 | orchestrator | Thursday 19 March 2026 02:56:35 +0000 (0:00:00.552) 0:03:32.451 ******** 2026-03-19 02:56:38.072040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:38.072095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:38.072108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:38.072118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:38.072127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:38.072148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:38.072158 | orchestrator | 2026-03-19 02:56:38.072173 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-19 02:56:39.725856 | orchestrator | Thursday 19 March 2026 02:56:38 +0000 (0:00:02.352) 0:03:34.804 ******** 2026-03-19 02:56:39.725991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:39.726075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:39.726099 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:39.726112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:39.726164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:39.726174 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:39.726207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:39.726219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:39.726229 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:39.726245 | orchestrator | 2026-03-19 02:56:39.726259 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-19 02:56:39.726273 | orchestrator | Thursday 19 March 2026 02:56:38 +0000 (0:00:00.850) 0:03:35.654 
******** 2026-03-19 02:56:39.726288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:39.726316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:39.726331 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 02:56:39.726392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:42.122469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:42.122589 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
02:56:42.122612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:42.122660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:42.122676 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
02:56:42.122689 | orchestrator | 2026-03-19 02:56:42.122703 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-19 02:56:42.122716 | orchestrator | Thursday 19 March 2026 02:56:39 +0000 (0:00:00.806) 0:03:36.461 ******** 2026-03-19 02:56:42.122748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:42.122786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:42.122802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:42.122831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:42.122847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:42.122870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-19 02:56:48.415303 | orchestrator | 2026-03-19 02:56:48.415440 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-19 02:56:48.415454 | orchestrator | Thursday 19 March 2026 02:56:42 +0000 (0:00:02.392) 0:03:38.853 ******** 2026-03-19 02:56:48.415467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:48.415503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:48.415528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:48.415554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:48.415564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:48.415577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:48.415585 | orchestrator | 2026-03-19 02:56:48.415592 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-19 02:56:48.415600 | orchestrator | Thursday 19 March 2026 02:56:47 +0000 (0:00:05.700) 0:03:44.554 ******** 2026-03-19 02:56:48.415612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:48.415620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:48.415628 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:48.415643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:52.868941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:52.869081 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:52.869112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-19 02:56:52.869162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 02:56:52.869184 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:52.869196 | orchestrator | 2026-03-19 02:56:52.869208 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-19 02:56:52.869220 | orchestrator | Thursday 19 March 2026 02:56:48 +0000 (0:00:00.597) 0:03:45.152 ******** 2026-03-19 02:56:52.869231 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:56:52.869241 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:56:52.869252 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:56:52.869263 | orchestrator | 2026-03-19 02:56:52.869273 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-19 02:56:52.869284 | orchestrator | Thursday 19 March 2026 02:56:50 +0000 (0:00:01.680) 0:03:46.833 ******** 2026-03-19 02:56:52.869295 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:56:52.869306 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:56:52.869346 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:56:52.869357 | orchestrator | 2026-03-19 02:56:52.869367 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-19 02:56:52.869378 | orchestrator | Thursday 19 March 2026 02:56:50 +0000 (0:00:00.349) 0:03:47.183 ******** 2026-03-19 02:56:52.869410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:52.869459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:52.869482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-19 02:56:52.869496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:52.869517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:56:52.869538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:35.369238 | orchestrator | 2026-03-19 02:57:35.369458 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 02:57:35.369468 | orchestrator | Thursday 19 March 2026 02:56:52 +0000 (0:00:01.981) 0:03:49.164 ******** 2026-03-19 02:57:35.369472 | orchestrator | 2026-03-19 02:57:35.369477 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 02:57:35.369481 | orchestrator | Thursday 19 March 2026 02:56:52 +0000 (0:00:00.153) 0:03:49.317 ******** 2026-03-19 
02:57:35.369485 | orchestrator | 2026-03-19 02:57:35.369490 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-19 02:57:35.369494 | orchestrator | Thursday 19 March 2026 02:56:52 +0000 (0:00:00.142) 0:03:49.459 ******** 2026-03-19 02:57:35.369498 | orchestrator | 2026-03-19 02:57:35.369501 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-19 02:57:35.369505 | orchestrator | Thursday 19 March 2026 02:56:52 +0000 (0:00:00.141) 0:03:49.601 ******** 2026-03-19 02:57:35.369509 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:57:35.369514 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:57:35.369518 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:57:35.369521 | orchestrator | 2026-03-19 02:57:35.369525 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-19 02:57:35.369529 | orchestrator | Thursday 19 March 2026 02:57:13 +0000 (0:00:20.603) 0:04:10.205 ******** 2026-03-19 02:57:35.369533 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:57:35.369539 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:57:35.369546 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:57:35.369552 | orchestrator | 2026-03-19 02:57:35.369558 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-19 02:57:35.369565 | orchestrator | 2026-03-19 02:57:35.369568 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 02:57:35.369572 | orchestrator | Thursday 19 March 2026 02:57:23 +0000 (0:00:10.205) 0:04:20.411 ******** 2026-03-19 02:57:35.369577 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:57:35.369582 | orchestrator | 2026-03-19 02:57:35.369600 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 02:57:35.369604 | orchestrator | Thursday 19 March 2026 02:57:24 +0000 (0:00:01.210) 0:04:21.621 ******** 2026-03-19 02:57:35.369607 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:57:35.369636 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:57:35.369643 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:57:35.369649 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:57:35.369664 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:57:35.369675 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:57:35.369679 | orchestrator | 2026-03-19 02:57:35.369683 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-19 02:57:35.369687 | orchestrator | Thursday 19 March 2026 02:57:25 +0000 (0:00:00.760) 0:04:22.381 ******** 2026-03-19 02:57:35.369690 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:57:35.369724 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:57:35.369729 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:57:35.369733 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 02:57:35.369738 | orchestrator | 2026-03-19 02:57:35.369742 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-19 02:57:35.369745 | orchestrator | Thursday 19 March 2026 02:57:26 +0000 (0:00:00.836) 0:04:23.218 ******** 2026-03-19 02:57:35.369765 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-19 02:57:35.369769 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-19 02:57:35.369773 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-19 02:57:35.369777 | orchestrator | 2026-03-19 02:57:35.369780 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-19 
02:57:35.369784 | orchestrator | Thursday 19 March 2026 02:57:27 +0000 (0:00:00.876) 0:04:24.095 ******** 2026-03-19 02:57:35.369789 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-19 02:57:35.369793 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-19 02:57:35.369797 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-19 02:57:35.369801 | orchestrator | 2026-03-19 02:57:35.369806 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 02:57:35.369810 | orchestrator | Thursday 19 March 2026 02:57:28 +0000 (0:00:01.230) 0:04:25.325 ******** 2026-03-19 02:57:35.369814 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-19 02:57:35.369832 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:57:35.369837 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-19 02:57:35.369842 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:57:35.369846 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-19 02:57:35.369863 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:57:35.369867 | orchestrator | 2026-03-19 02:57:35.369871 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-19 02:57:35.369876 | orchestrator | Thursday 19 March 2026 02:57:29 +0000 (0:00:00.544) 0:04:25.870 ******** 2026-03-19 02:57:35.369880 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 02:57:35.369884 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 02:57:35.369889 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 02:57:35.369893 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 02:57:35.369897 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 02:57:35.369901 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 02:57:35.369905 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 02:57:35.369910 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:57:35.369936 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 02:57:35.369947 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 02:57:35.369953 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:57:35.369960 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-19 02:57:35.369974 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 02:57:35.369981 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 02:57:35.369986 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-19 02:57:35.369993 | orchestrator | 2026-03-19 02:57:35.370002 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-19 02:57:35.370010 | orchestrator | Thursday 19 March 2026 02:57:30 +0000 (0:00:01.332) 0:04:27.203 ******** 2026-03-19 02:57:35.370064 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:57:35.370068 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:57:35.370073 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:57:35.370078 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:57:35.370082 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:57:35.370086 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:57:35.370092 | orchestrator | 2026-03-19 02:57:35.370098 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-19 02:57:35.370104 | orchestrator | 
Thursday 19 March 2026 02:57:31 +0000 (0:00:01.255) 0:04:28.459 ******** 2026-03-19 02:57:35.370111 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:57:35.370120 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:57:35.370128 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:57:35.370133 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:57:35.370140 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:57:35.370146 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:57:35.370152 | orchestrator | 2026-03-19 02:57:35.370158 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-19 02:57:35.370164 | orchestrator | Thursday 19 March 2026 02:57:33 +0000 (0:00:01.776) 0:04:30.235 ******** 2026-03-19 02:57:35.370180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:35.370190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:35.370204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.015992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:37.016004 | orchestrator | 2026-03-19 02:57:37.016012 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 02:57:37.016021 | orchestrator | Thursday 19 
March 2026 02:57:35 +0000 (0:00:02.241) 0:04:32.476 ******** 2026-03-19 02:57:37.016028 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 02:57:37.016035 | orchestrator | 2026-03-19 02:57:37.016041 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-19 02:57:37.016052 | orchestrator | Thursday 19 March 2026 02:57:37 +0000 (0:00:01.271) 0:04:33.748 ******** 2026-03-19 02:57:40.431425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 
02:57:40.431589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431630 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:40.431689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:41.997919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:41.998109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:57:41.998129 | orchestrator | 2026-03-19 02:57:41.998141 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-19 02:57:41.998154 | orchestrator | Thursday 19 March 2026 02:57:40 +0000 (0:00:03.619) 0:04:37.367 ******** 2026-03-19 02:57:41.998189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:57:41.998202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:57:41.998213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:41.998223 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:57:41.998307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:57:41.998320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:57:41.998331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:41.998349 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:57:41.998360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:57:41.998370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:57:41.998388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:43.798087 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:57:43.798220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:43.798286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:43.798323 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:57:43.798336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:43.798348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:43.798360 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:57:43.798371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:43.798382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:43.798394 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:57:43.798405 | orchestrator |
2026-03-19 02:57:43.798417 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-19 02:57:43.798429 | orchestrator | Thursday 19 March 2026 02:57:42 +0000 (0:00:01.642) 0:04:39.010 ********
2026-03-19 02:57:43.798465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:57:43.798487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:57:43.798500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:43.798513 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:57:43.798524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:57:43.798536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:57:43.798561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:51.111405 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:57:51.111545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:57:51.112336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:57:51.112374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:57:51.112387 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:57:51.112401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:51.112412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:51.112422 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:57:51.112473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:51.112514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:51.112524 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:57:51.112533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:57:51.112542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:57:51.112552 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:57:51.112562 | orchestrator |
2026-03-19 02:57:51.112572 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-19 02:57:51.112582 | orchestrator | Thursday 19 March 2026 02:57:44 +0000 (0:00:02.205) 0:04:41.215 ********
2026-03-19 02:57:51.112591 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:57:51.112600 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:57:51.112609 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:57:51.112619 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 02:57:51.112629 | orchestrator |
2026-03-19 02:57:51.112640 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-19 02:57:51.112649 | orchestrator | Thursday 19 March 2026 02:57:45 +0000 (0:00:00.936) 0:04:42.151 ********
2026-03-19 02:57:51.112658 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 02:57:51.112667 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 02:57:51.112677 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 02:57:51.112688 | orchestrator |
2026-03-19 02:57:51.112697 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-19 02:57:51.112706 | orchestrator | Thursday 19 March 2026 02:57:46 +0000 (0:00:01.085) 0:04:43.236 ********
2026-03-19 02:57:51.112716 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 02:57:51.112724 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-19 02:57:51.112732 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-19 02:57:51.112740 | orchestrator |
2026-03-19 02:57:51.112748 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-19 02:57:51.112767 | orchestrator | Thursday 19 March 2026 02:57:47 +0000 (0:00:00.947) 0:04:44.184 ********
2026-03-19 02:57:51.112777 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:57:51.112788 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:57:51.112797 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:57:51.112807 | orchestrator |
2026-03-19 02:57:51.112817 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-19 02:57:51.112826 | orchestrator | Thursday 19 March 2026 02:57:47 +0000 (0:00:00.528) 0:04:44.712 ********
2026-03-19 02:57:51.112836 | orchestrator | ok: [testbed-node-3]
2026-03-19 02:57:51.112845 | orchestrator | ok: [testbed-node-4]
2026-03-19 02:57:51.112854 | orchestrator | ok: [testbed-node-5]
2026-03-19 02:57:51.112863 | orchestrator |
2026-03-19 02:57:51.112872 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-19 02:57:51.112881 | orchestrator | Thursday 19 March 2026 02:57:48 +0000 (0:00:00.499) 0:04:45.212 ********
2026-03-19 02:57:51.112891 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-19 02:57:51.112901 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-19 02:57:51.112910 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-19 02:57:51.112919 | orchestrator |
2026-03-19 02:57:51.112929 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-19 02:57:51.112946 | orchestrator | Thursday 19 March 2026 02:57:49 +0000 (0:00:01.387) 0:04:46.599 ********
2026-03-19 02:57:51.112968 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-19 02:58:09.559738 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-19 02:58:09.559859 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-19 02:58:09.559874 | orchestrator |
2026-03-19 02:58:09.559886 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-19 02:58:09.559897 | orchestrator | Thursday 19 March 2026 02:57:51 +0000 (0:00:01.247) 0:04:47.846 ********
2026-03-19 02:58:09.559907 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-19 02:58:09.559917 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-19 02:58:09.559926 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-19 02:58:09.559936 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-19 02:58:09.559945 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-19 02:58:09.559957 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-19 02:58:09.559974 | orchestrator |
2026-03-19 02:58:09.559991 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-19 02:58:09.560009 | orchestrator | Thursday 19 March 2026 02:57:54 +0000 (0:00:03.746) 0:04:51.593 ********
2026-03-19 02:58:09.560025 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:58:09.560043 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:58:09.560058 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:58:09.560068 | orchestrator |
2026-03-19 02:58:09.560078 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-19 02:58:09.560088 | orchestrator | Thursday 19 March 2026 02:57:55 +0000 (0:00:00.316) 0:04:51.909 ********
2026-03-19 02:58:09.560098 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:58:09.560107 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:58:09.560117 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:58:09.560127 | orchestrator |
2026-03-19 02:58:09.560137 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-19 02:58:09.560147 | orchestrator | Thursday 19 March 2026 02:57:55 +0000 (0:00:00.518) 0:04:52.428 ********
2026-03-19 02:58:09.560157 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:58:09.560166 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:58:09.560176 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:58:09.560185 | orchestrator |
2026-03-19 02:58:09.560221 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-19 02:58:09.560260 | orchestrator | Thursday 19 March 2026 02:57:56 +0000 (0:00:01.275) 0:04:53.703 ********
2026-03-19 02:58:09.560274 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-19 02:58:09.560289 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-19 02:58:09.560301 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-19 02:58:09.560312 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-19 02:58:09.560324 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-19 02:58:09.560335 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-19 02:58:09.560346 | orchestrator |
2026-03-19 02:58:09.560358 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-19 02:58:09.560370 | orchestrator | Thursday 19 March 2026 02:58:00 +0000 (0:00:03.402) 0:04:57.106 ********
2026-03-19 02:58:09.560381 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-19 02:58:09.560393 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-19 02:58:09.560404 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-19 02:58:09.560415 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-19 02:58:09.560427 | orchestrator | changed: [testbed-node-3]
2026-03-19 02:58:09.560438 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-19 02:58:09.560449 | orchestrator | changed: [testbed-node-4]
2026-03-19 02:58:09.560460 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-19 02:58:09.560471 | orchestrator | changed: [testbed-node-5]
2026-03-19 02:58:09.560482 | orchestrator |
2026-03-19 02:58:09.560493 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-19 02:58:09.560505 | orchestrator | Thursday 19 March 2026 02:58:03 +0000 (0:00:03.496) 0:05:00.603 ********
2026-03-19 02:58:09.560516 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:58:09.560528 | orchestrator |
2026-03-19 02:58:09.560540 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-19 02:58:09.560551 | orchestrator | Thursday 19 March 2026 02:58:03 +0000 (0:00:00.126) 0:05:00.729 ********
2026-03-19 02:58:09.560562 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:58:09.560573 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:58:09.560584 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:58:09.560596 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:58:09.560608 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:58:09.560619 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:58:09.560630 | orchestrator |
2026-03-19 02:58:09.560642 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-19 02:58:09.560653 | orchestrator | Thursday 19 March 2026 02:58:04 +0000 (0:00:00.802) 0:05:01.532 ********
2026-03-19 02:58:09.560663 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-19 02:58:09.560672 | orchestrator |
2026-03-19 02:58:09.560698 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-19 02:58:09.560708 | orchestrator | Thursday 19 March 2026 02:58:05 +0000 (0:00:00.655) 0:05:02.187 ********
2026-03-19 02:58:09.560718 | orchestrator | skipping: [testbed-node-3]
2026-03-19 02:58:09.560744 | orchestrator | skipping: [testbed-node-4]
2026-03-19 02:58:09.560754 | orchestrator | skipping: [testbed-node-5]
2026-03-19 02:58:09.560764 | orchestrator | skipping: [testbed-node-0]
2026-03-19 02:58:09.560774 | orchestrator | skipping: [testbed-node-1]
2026-03-19 02:58:09.560783 | orchestrator | skipping: [testbed-node-2]
2026-03-19 02:58:09.560793 | orchestrator |
2026-03-19 02:58:09.560803 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-19 02:58:09.560820 | orchestrator | Thursday 19 March 2026 02:58:06 +0000 (0:00:00.774) 0:05:02.962 ********
2026-03-19 02:58:09.560834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:58:09.560848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:58:09.560859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-19 02:58:09.560870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:58:09.560894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:58:14.064680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-19 02:58:14.064788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:58:14.064800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:58:14.064807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-19 02:58:14.064813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:58:14.064819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:58:14.064860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 02:58:14.064896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-19 02:58:14.064905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev',
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:14.064911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:14.064917 | orchestrator | 2026-03-19 02:58:14.064924 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-19 02:58:14.064932 | orchestrator | Thursday 19 March 2026 02:58:09 +0000 (0:00:03.578) 0:05:06.540 ******** 2026-03-19 02:58:14.064940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:58:14.064951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:14.064970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:58:16.269568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:16.269669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:58:16.269678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:16.269685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:58:16.269805 | orchestrator | 2026-03-19 02:58:16.269810 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-19 02:58:16.269818 | orchestrator | Thursday 19 March 2026 02:58:16 +0000 (0:00:06.461) 0:05:13.002 ******** 2026-03-19 02:58:37.233539 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:58:37.233654 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:37.233674 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:37.233688 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.233702 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.233716 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.233730 | orchestrator | 2026-03-19 02:58:37.233745 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-19 02:58:37.233760 | orchestrator | Thursday 19 March 2026 02:58:17 +0000 (0:00:01.382) 0:05:14.384 ******** 2026-03-19 02:58:37.233774 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-19 02:58:37.233787 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-19 02:58:37.233801 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-19 02:58:37.233814 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-19 02:58:37.233827 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-19 02:58:37.233840 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-19 02:58:37.233855 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.233869 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-19 02:58:37.233882 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.233895 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-19 02:58:37.233908 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-19 02:58:37.233921 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.233934 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-19 02:58:37.233979 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-19 02:58:37.233993 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-19 02:58:37.234008 | orchestrator | 2026-03-19 02:58:37.234071 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-19 02:58:37.234085 | orchestrator | Thursday 19 March 2026 02:58:21 +0000 (0:00:03.606) 0:05:17.991 ******** 2026-03-19 02:58:37.234098 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:37.234111 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:58:37.234124 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:37.234138 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.234152 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.234252 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.234266 | orchestrator | 2026-03-19 02:58:37.234280 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-19 02:58:37.234293 | orchestrator | Thursday 19 March 2026 02:58:21 +0000 (0:00:00.631) 0:05:18.623 ******** 2026-03-19 02:58:37.234306 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-19 02:58:37.234319 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-19 02:58:37.234333 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-19 02:58:37.234346 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-19 02:58:37.234359 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234388 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-19 02:58:37.234403 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234416 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-19 02:58:37.234430 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234443 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234456 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.234469 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234482 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 02:58:37.234495 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-19 02:58:37.234508 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.234522 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234535 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234569 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234583 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234596 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234609 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-19 02:58:37.234623 | orchestrator | 2026-03-19 02:58:37.234636 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-19 02:58:37.234661 | orchestrator | Thursday 19 March 2026 02:58:27 +0000 (0:00:05.292) 0:05:23.916 ******** 2026-03-19 02:58:37.234675 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:58:37.234688 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:58:37.234701 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 02:58:37.234715 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 02:58:37.234728 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 02:58:37.234741 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-19 02:58:37.234754 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-19 02:58:37.234768 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-19 02:58:37.234781 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-19 02:58:37.234794 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:58:37.234808 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:58:37.234821 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 02:58:37.234834 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-19 02:58:37.234847 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.234860 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 02:58:37.234873 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-19 02:58:37.234887 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.234900 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 02:58:37.234914 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-19 02:58:37.234927 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.234939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-19 02:58:37.234952 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 02:58:37.234965 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 02:58:37.234978 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-19 02:58:37.234991 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 02:58:37.235005 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 02:58:37.235024 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-19 02:58:37.235037 | orchestrator | 2026-03-19 02:58:37.235049 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-19 02:58:37.235062 | orchestrator | Thursday 19 March 2026 02:58:33 +0000 (0:00:06.811) 0:05:30.727 ******** 2026-03-19 02:58:37.235076 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:37.235088 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:58:37.235100 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:37.235112 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.235125 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.235137 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.235149 | orchestrator | 2026-03-19 02:58:37.235182 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-19 02:58:37.235206 | orchestrator | Thursday 19 March 2026 02:58:34 +0000 (0:00:00.760) 0:05:31.487 ******** 2026-03-19 02:58:37.235218 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:37.235229 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:58:37.235242 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:37.235255 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.235269 | orchestrator | 
skipping: [testbed-node-1] 2026-03-19 02:58:37.235280 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.235294 | orchestrator | 2026-03-19 02:58:37.235306 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-19 02:58:37.235319 | orchestrator | Thursday 19 March 2026 02:58:35 +0000 (0:00:00.615) 0:05:32.103 ******** 2026-03-19 02:58:37.235331 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:37.235344 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:37.235358 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:37.235371 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:58:37.235385 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:58:37.235399 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:58:37.235412 | orchestrator | 2026-03-19 02:58:37.235433 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-19 02:58:38.243453 | orchestrator | Thursday 19 March 2026 02:58:37 +0000 (0:00:01.858) 0:05:33.962 ******** 2026-03-19 02:58:38.243592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-19 02:58:38.243623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:38.243644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 02:58:38.243665 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:38.243736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:58:38.243872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:38.243924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 02:58:38.243946 | orchestrator | skipping: 
[testbed-node-4] 2026-03-19 02:58:38.243966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-19 02:58:38.243985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-19 02:58:38.244036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-19 02:58:38.244073 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:38.244093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 02:58:38.244126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:58:41.550986 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:41.551113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 02:58:41.551132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:58:41.551180 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:41.551243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-19 02:58:41.551260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 02:58:41.551309 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:41.551327 | orchestrator | 2026-03-19 02:58:41.551343 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-19 02:58:41.551353 | orchestrator | Thursday 19 March 2026 02:58:38 +0000 (0:00:01.194) 0:05:35.156 ******** 2026-03-19 02:58:41.551363 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-19 02:58:41.551386 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551395 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:58:41.551417 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-19 02:58:41.551426 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551444 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:58:41.551454 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-19 02:58:41.551463 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551472 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:58:41.551481 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-19 02:58:41.551489 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551498 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:58:41.551507 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-03-19 02:58:41.551516 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551527 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:58:41.551536 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-19 02:58:41.551546 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-19 02:58:41.551556 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:58:41.551568 | orchestrator | 2026-03-19 02:58:41.551584 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-19 02:58:41.551610 | orchestrator | Thursday 19 March 2026 02:58:39 +0000 (0:00:00.788) 0:05:35.945 ******** 2026-03-19 02:58:41.551650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:58:41.551669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:58:41.551698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-19 02:58:41.551723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:58:41.551742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:58:41.551769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-19 02:59:27.937818 | orchestrator | 2026-03-19 02:59:27.937826 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-19 02:59:27.937834 | orchestrator | Thursday 19 March 2026 02:58:41 +0000 (0:00:02.584) 0:05:38.529 
******** 2026-03-19 02:59:27.937841 | orchestrator | skipping: [testbed-node-3] 2026-03-19 02:59:27.937848 | orchestrator | skipping: [testbed-node-4] 2026-03-19 02:59:27.937855 | orchestrator | skipping: [testbed-node-5] 2026-03-19 02:59:27.937861 | orchestrator | skipping: [testbed-node-0] 2026-03-19 02:59:27.937868 | orchestrator | skipping: [testbed-node-1] 2026-03-19 02:59:27.937876 | orchestrator | skipping: [testbed-node-2] 2026-03-19 02:59:27.937882 | orchestrator | 2026-03-19 02:59:27.937889 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 02:59:27.937895 | orchestrator | Thursday 19 March 2026 02:58:42 +0000 (0:00:00.810) 0:05:39.339 ******** 2026-03-19 02:59:27.937901 | orchestrator | 2026-03-19 02:59:27.937907 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 02:59:27.937913 | orchestrator | Thursday 19 March 2026 02:58:42 +0000 (0:00:00.138) 0:05:39.478 ******** 2026-03-19 02:59:27.937921 | orchestrator | 2026-03-19 02:59:27.937932 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 02:59:27.937939 | orchestrator | Thursday 19 March 2026 02:58:42 +0000 (0:00:00.149) 0:05:39.627 ******** 2026-03-19 02:59:27.937945 | orchestrator | 2026-03-19 02:59:27.937952 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 02:59:27.937958 | orchestrator | Thursday 19 March 2026 02:58:43 +0000 (0:00:00.142) 0:05:39.770 ******** 2026-03-19 02:59:27.937964 | orchestrator | 2026-03-19 02:59:27.937971 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-19 02:59:27.937977 | orchestrator | Thursday 19 March 2026 02:58:43 +0000 (0:00:00.139) 0:05:39.910 ******** 2026-03-19 02:59:27.937983 | orchestrator | 2026-03-19 02:59:27.937990 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-19 02:59:27.938001 | orchestrator | Thursday 19 March 2026 02:58:43 +0000 (0:00:00.310) 0:05:40.220 ******** 2026-03-19 02:59:27.938007 | orchestrator | 2026-03-19 02:59:27.938069 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-19 02:59:27.938077 | orchestrator | Thursday 19 March 2026 02:58:43 +0000 (0:00:00.157) 0:05:40.377 ******** 2026-03-19 02:59:27.938083 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:59:27.938090 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:59:27.938123 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:59:27.938131 | orchestrator | 2026-03-19 02:59:27.938138 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-19 02:59:27.938145 | orchestrator | Thursday 19 March 2026 02:58:55 +0000 (0:00:11.607) 0:05:51.985 ******** 2026-03-19 02:59:27.938153 | orchestrator | changed: [testbed-node-0] 2026-03-19 02:59:27.938160 | orchestrator | changed: [testbed-node-1] 2026-03-19 02:59:27.938166 | orchestrator | changed: [testbed-node-2] 2026-03-19 02:59:27.938180 | orchestrator | 2026-03-19 02:59:27.938187 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-19 02:59:27.938192 | orchestrator | Thursday 19 March 2026 02:59:08 +0000 (0:00:13.299) 0:06:05.285 ******** 2026-03-19 02:59:27.938199 | orchestrator | changed: [testbed-node-3] 2026-03-19 02:59:27.938205 | orchestrator | changed: [testbed-node-5] 2026-03-19 02:59:27.938211 | orchestrator | changed: [testbed-node-4] 2026-03-19 02:59:27.938218 | orchestrator | 2026-03-19 02:59:27.938232 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-19 03:01:40.455007 | orchestrator | Thursday 19 March 2026 02:59:27 +0000 (0:00:19.381) 0:06:24.666 ******** 2026-03-19 03:01:40.455136 | orchestrator | changed: 
[testbed-node-4] 2026-03-19 03:01:40.455152 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:01:40.455161 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:01:40.455170 | orchestrator | 2026-03-19 03:01:40.455179 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-19 03:01:40.455188 | orchestrator | Thursday 19 March 2026 03:00:04 +0000 (0:00:36.229) 0:07:00.895 ******** 2026-03-19 03:01:40.455193 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:01:40.455198 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:01:40.455203 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:01:40.455208 | orchestrator | 2026-03-19 03:01:40.455213 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-19 03:01:40.455218 | orchestrator | Thursday 19 March 2026 03:00:04 +0000 (0:00:00.775) 0:07:01.671 ******** 2026-03-19 03:01:40.455223 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:01:40.455227 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:01:40.455232 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:01:40.455236 | orchestrator | 2026-03-19 03:01:40.455241 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-19 03:01:40.455245 | orchestrator | Thursday 19 March 2026 03:00:05 +0000 (0:00:00.808) 0:07:02.479 ******** 2026-03-19 03:01:40.455250 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:01:40.455255 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:01:40.455260 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:01:40.455264 | orchestrator | 2026-03-19 03:01:40.455269 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-19 03:01:40.455275 | orchestrator | Thursday 19 March 2026 03:00:30 +0000 (0:00:24.641) 0:07:27.121 ******** 2026-03-19 03:01:40.455279 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 03:01:40.455284 | orchestrator | 2026-03-19 03:01:40.455288 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-19 03:01:40.455293 | orchestrator | Thursday 19 March 2026 03:00:30 +0000 (0:00:00.129) 0:07:27.250 ******** 2026-03-19 03:01:40.455298 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:01:40.455302 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:01:40.455307 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:01:40.455311 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:01:40.455316 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:01:40.455321 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-19 03:01:40.455327 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 03:01:40.455333 | orchestrator | 2026-03-19 03:01:40.455337 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-19 03:01:40.455350 | orchestrator | Thursday 19 March 2026 03:00:53 +0000 (0:00:22.703) 0:07:49.953 ******** 2026-03-19 03:01:40.455355 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:01:40.455360 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:01:40.455364 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:01:40.455369 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:01:40.455373 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:01:40.455378 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:01:40.455405 | orchestrator | 2026-03-19 03:01:40.455410 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-19 03:01:40.455414 | orchestrator | Thursday 19 March 2026 03:01:01 +0000 (0:00:07.967) 0:07:57.921 ******** 2026-03-19 03:01:40.455419 | orchestrator | skipping: [testbed-node-4] 
2026-03-19 03:01:40.455423 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:01:40.455428 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:01:40.455432 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:01:40.455438 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:01:40.455455 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-19 03:01:40.455460 | orchestrator | 2026-03-19 03:01:40.455465 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-19 03:01:40.455469 | orchestrator | Thursday 19 March 2026 03:01:04 +0000 (0:00:03.410) 0:08:01.331 ******** 2026-03-19 03:01:40.455474 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 03:01:40.455478 | orchestrator | 2026-03-19 03:01:40.455483 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-19 03:01:40.455487 | orchestrator | Thursday 19 March 2026 03:01:18 +0000 (0:00:14.005) 0:08:15.337 ******** 2026-03-19 03:01:40.455492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 03:01:40.455496 | orchestrator | 2026-03-19 03:01:40.455501 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-19 03:01:40.455505 | orchestrator | Thursday 19 March 2026 03:01:20 +0000 (0:00:01.603) 0:08:16.941 ******** 2026-03-19 03:01:40.455510 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:01:40.455515 | orchestrator | 2026-03-19 03:01:40.455519 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-19 03:01:40.455524 | orchestrator | Thursday 19 March 2026 03:01:21 +0000 (0:00:01.692) 0:08:18.633 ******** 2026-03-19 03:01:40.455528 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 03:01:40.455534 | orchestrator | 2026-03-19 03:01:40.455539 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-19 03:01:40.455545 | orchestrator | Thursday 19 March 2026 03:01:34 +0000 (0:00:12.832) 0:08:31.466 ********
2026-03-19 03:01:40.455550 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:01:40.455556 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:01:40.455562 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:01:40.455567 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:40.455572 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:40.455577 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:40.455582 | orchestrator |
2026-03-19 03:01:40.455588 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-19 03:01:40.455593 | orchestrator |
2026-03-19 03:01:40.455598 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-19 03:01:40.455615 | orchestrator | Thursday 19 March 2026 03:01:36 +0000 (0:00:01.986) 0:08:33.453 ********
2026-03-19 03:01:40.455621 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:01:40.455626 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:01:40.455631 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:01:40.455635 | orchestrator |
2026-03-19 03:01:40.455640 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-19 03:01:40.455644 | orchestrator |
2026-03-19 03:01:40.455649 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-19 03:01:40.455654 | orchestrator | Thursday 19 March 2026 03:01:37 +0000 (0:00:00.926) 0:08:34.379 ********
2026-03-19 03:01:40.455658 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:40.455663 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:40.455667 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:40.455672 | orchestrator |
2026-03-19 03:01:40.455676 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-19 03:01:40.455681 | orchestrator |
2026-03-19 03:01:40.455685 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-19 03:01:40.455697 | orchestrator | Thursday 19 March 2026 03:01:38 +0000 (0:00:00.754) 0:08:35.133 ********
2026-03-19 03:01:40.455701 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-19 03:01:40.455708 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-19 03:01:40.455716 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455724 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-19 03:01:40.455733 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-19 03:01:40.455741 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455749 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:01:40.455758 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-19 03:01:40.455765 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-19 03:01:40.455769 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455774 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-19 03:01:40.455778 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-19 03:01:40.455783 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455787 | orchestrator | skipping: [testbed-node-4]
2026-03-19 03:01:40.455792 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-19 03:01:40.455796 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-19 03:01:40.455801 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455805 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-19 03:01:40.455810 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-19 03:01:40.455814 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455819 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:01:40.455823 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-19 03:01:40.455828 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-19 03:01:40.455832 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455837 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-19 03:01:40.455841 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-19 03:01:40.455847 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455855 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:40.455863 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-19 03:01:40.455875 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-19 03:01:40.455883 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455890 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-19 03:01:40.455897 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-19 03:01:40.455906 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455913 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:40.455921 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-19 03:01:40.455928 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-19 03:01:40.455937 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-19 03:01:40.455944 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-19 03:01:40.455951 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-19 03:01:40.455959 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-19 03:01:40.455964 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:40.456007 | orchestrator |
2026-03-19 03:01:40.456013 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-19 03:01:40.456023 | orchestrator |
2026-03-19 03:01:40.456028 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-19 03:01:40.456032 | orchestrator | Thursday 19 March 2026 03:01:39 +0000 (0:00:01.425) 0:08:36.559 ********
2026-03-19 03:01:40.456037 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-19 03:01:40.456042 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-19 03:01:40.456046 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:40.456051 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-19 03:01:40.456055 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-19 03:01:40.456059 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:40.456064 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-19 03:01:40.456069 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-19 03:01:40.456073 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:40.456078 | orchestrator |
2026-03-19 03:01:40.456087 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-19 03:01:42.276644 | orchestrator |
2026-03-19 03:01:42.276751 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-19 03:01:42.276764 | orchestrator | Thursday 19 March 2026 03:01:40 +0000 (0:00:00.628) 0:08:37.188 ********
2026-03-19 03:01:42.276770 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:42.276776 | orchestrator |
2026-03-19 03:01:42.276781 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-19 03:01:42.276786 | orchestrator |
2026-03-19 03:01:42.276791 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-19 03:01:42.276797 | orchestrator | Thursday 19 March 2026 03:01:41 +0000 (0:00:00.913) 0:08:38.101 ********
2026-03-19 03:01:42.276802 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:42.276807 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:42.276812 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:42.276817 | orchestrator |
2026-03-19 03:01:42.276821 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:01:42.276858 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 03:01:42.276866 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-19 03:01:42.276871 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-19 03:01:42.276876 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-19 03:01:42.276881 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-19 03:01:42.276886 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-19 03:01:42.276891 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-19 03:01:42.276895 | orchestrator |
2026-03-19 03:01:42.276900 | orchestrator |
2026-03-19 03:01:42.276905 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:01:42.276910 | orchestrator | Thursday 19 March 2026 03:01:41 +0000 (0:00:00.453) 0:08:38.555 ********
2026-03-19 03:01:42.276915 | orchestrator | ===============================================================================
2026-03-19 03:01:42.276920 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.23s
2026-03-19 03:01:42.276925 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.36s
2026-03-19 03:01:42.276956 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.64s
2026-03-19 03:01:42.276961 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.78s
2026-03-19 03:01:42.277006 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.70s
2026-03-19 03:01:42.277013 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.60s
2026-03-19 03:01:42.277031 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.20s
2026-03-19 03:01:42.277036 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.38s
2026-03-19 03:01:42.277041 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.24s
2026-03-19 03:01:42.277046 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.17s
2026-03-19 03:01:42.277051 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.01s
2026-03-19 03:01:42.277055 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.52s
2026-03-19 03:01:42.277060 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.30s
2026-03-19 03:01:42.277065 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.13s
2026-03-19 03:01:42.277070 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.83s
2026-03-19 03:01:42.277074 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.61s
2026-03-19 03:01:42.277079 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.21s
2026-03-19 03:01:42.277084 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.44s
2026-03-19 03:01:42.277089 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.26s
2026-03-19 03:01:42.277093 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.97s
2026-03-19 03:01:44.814271 | orchestrator | 2026-03-19 03:01:44 | INFO  | Task ef3db1da-081a-4643-9ee2-c855bb01ca1b (horizon) was prepared for execution.
2026-03-19 03:01:44.814349 | orchestrator | 2026-03-19 03:01:44 | INFO  | It takes a moment until task ef3db1da-081a-4643-9ee2-c855bb01ca1b (horizon) has been started and output is visible here.
2026-03-19 03:01:52.480259 | orchestrator |
2026-03-19 03:01:52.480362 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:01:52.480373 | orchestrator |
2026-03-19 03:01:52.480379 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:01:52.480386 | orchestrator | Thursday 19 March 2026 03:01:49 +0000 (0:00:00.271) 0:00:00.271 ********
2026-03-19 03:01:52.480392 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:52.480399 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:52.480406 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:52.480413 | orchestrator |
2026-03-19 03:01:52.480418 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:01:52.480424 | orchestrator | Thursday 19 March 2026 03:01:49 +0000 (0:00:00.297) 0:00:00.568 ********
2026-03-19 03:01:52.480431 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-19 03:01:52.480439 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-19 03:01:52.480445 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-19 03:01:52.480452 | orchestrator |
2026-03-19 03:01:52.480458 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-19 03:01:52.480464 | orchestrator |
2026-03-19 03:01:52.480470 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-19 03:01:52.480474 | orchestrator | Thursday 19 March 2026 03:01:50 +0000 (0:00:00.445) 0:00:01.013 ********
2026-03-19 03:01:52.480479 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:01:52.480483 | orchestrator |
2026-03-19 03:01:52.480488 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-19 03:01:52.480512 | orchestrator | Thursday 19 March 2026 03:01:50 +0000 (0:00:00.575) 0:00:01.588 ******** 2026-03-19 03:01:52.480538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:01:52.480562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:01:52.480580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:01:52.480586 | orchestrator | 2026-03-19 03:01:52.480592 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-19 03:01:52.480598 | orchestrator | Thursday 19 March 2026 03:01:51 +0000 (0:00:01.202) 0:00:02.791 ******** 2026-03-19 03:01:52.480603 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:01:52.480609 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:01:52.480616 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:01:52.480621 | orchestrator | 2026-03-19 03:01:52.480628 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 03:01:52.480633 | orchestrator | Thursday 19 March 2026 03:01:52 +0000 (0:00:00.458) 0:00:03.250 ******** 2026-03-19 03:01:52.480644 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-19 03:01:58.471713 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-19 03:01:58.471832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-19 03:01:58.471842 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-03-19 03:01:58.471849 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-19 03:01:58.471855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-19 03:01:58.471861 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-19 03:01:58.471894 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-19 03:01:58.471901 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-19 03:01:58.471907 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-19 03:01:58.471913 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-19 03:01:58.471919 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-19 03:01:58.471925 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-19 03:01:58.471931 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-19 03:01:58.471938 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-19 03:01:58.471944 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-19 03:01:58.471950 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-19 03:01:58.472001 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-19 03:01:58.472008 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-19 03:01:58.472013 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-19 03:01:58.472016 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-19 03:01:58.472020 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-19 03:01:58.472024 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-19 03:01:58.472028 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-19 03:01:58.472034 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-19 03:01:58.472039 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-19 03:01:58.472043 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-19 03:01:58.472061 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-19 03:01:58.472065 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-19 03:01:58.472071 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-19 03:01:58.472077 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-19 03:01:58.472083 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-19 03:01:58.472089 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-19 03:01:58.472096 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-19 03:01:58.472102 | orchestrator |
2026-03-19 03:01:58.472118 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472134 | orchestrator | Thursday 19 March 2026 03:01:53 +0000 (0:00:00.716) 0:00:03.966 ********
2026-03-19 03:01:58.472148 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472157 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472162 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472166 | orchestrator |
2026-03-19 03:01:58.472170 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472174 | orchestrator | Thursday 19 March 2026 03:01:53 +0000 (0:00:00.340) 0:00:04.306 ********
2026-03-19 03:01:58.472178 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472182 | orchestrator |
2026-03-19 03:01:58.472200 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472204 | orchestrator | Thursday 19 March 2026 03:01:53 +0000 (0:00:00.288) 0:00:04.594 ********
2026-03-19 03:01:58.472208 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472212 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472216 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472219 | orchestrator |
2026-03-19 03:01:58.472223 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472227 | orchestrator | Thursday 19 March 2026 03:01:53 +0000 (0:00:00.299) 0:00:04.894 ********
2026-03-19 03:01:58.472231 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472238 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472244 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472250 | orchestrator |
2026-03-19 03:01:58.472254 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472258 | orchestrator | Thursday 19 March 2026 03:01:54 +0000 (0:00:00.325) 0:00:05.219 ********
2026-03-19 03:01:58.472263 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472269 | orchestrator |
2026-03-19 03:01:58.472276 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472283 | orchestrator | Thursday 19 March 2026 03:01:54 +0000 (0:00:00.134) 0:00:05.353 ********
2026-03-19 03:01:58.472291 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472297 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472304 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472311 | orchestrator |
2026-03-19 03:01:58.472316 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472320 | orchestrator | Thursday 19 March 2026 03:01:54 +0000 (0:00:00.288) 0:00:05.641 ********
2026-03-19 03:01:58.472325 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472329 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472333 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472338 | orchestrator |
2026-03-19 03:01:58.472342 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472347 | orchestrator | Thursday 19 March 2026 03:01:55 +0000 (0:00:00.511) 0:00:06.153 ********
2026-03-19 03:01:58.472351 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472355 | orchestrator |
2026-03-19 03:01:58.472360 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472364 | orchestrator | Thursday 19 March 2026 03:01:55 +0000 (0:00:00.125) 0:00:06.278 ********
2026-03-19 03:01:58.472368 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472373 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472377 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472381 | orchestrator |
2026-03-19 03:01:58.472386 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472390 | orchestrator | Thursday 19 March 2026 03:01:55 +0000 (0:00:00.292) 0:00:06.570 ********
2026-03-19 03:01:58.472394 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472399 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472403 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472407 | orchestrator |
2026-03-19 03:01:58.472412 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472416 | orchestrator | Thursday 19 March 2026 03:01:55 +0000 (0:00:00.319) 0:00:06.890 ********
2026-03-19 03:01:58.472425 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472429 | orchestrator |
2026-03-19 03:01:58.472434 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472438 | orchestrator | Thursday 19 March 2026 03:01:56 +0000 (0:00:00.128) 0:00:07.018 ********
2026-03-19 03:01:58.472442 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472447 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472451 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472455 | orchestrator |
2026-03-19 03:01:58.472459 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472468 | orchestrator | Thursday 19 March 2026 03:01:56 +0000 (0:00:00.519) 0:00:07.538 ********
2026-03-19 03:01:58.472473 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472477 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472481 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472486 | orchestrator |
2026-03-19 03:01:58.472490 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472494 | orchestrator | Thursday 19 March 2026 03:01:56 +0000 (0:00:00.319) 0:00:07.858 ********
2026-03-19 03:01:58.472499 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472503 | orchestrator |
2026-03-19 03:01:58.472507 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472513 | orchestrator | Thursday 19 March 2026 03:01:57 +0000 (0:00:00.131) 0:00:07.990 ********
2026-03-19 03:01:58.472520 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472525 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472529 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472534 | orchestrator |
2026-03-19 03:01:58.472541 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472547 | orchestrator | Thursday 19 March 2026 03:01:57 +0000 (0:00:00.330) 0:00:08.320 ********
2026-03-19 03:01:58.472553 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:01:58.472560 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:01:58.472565 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:01:58.472572 | orchestrator |
2026-03-19 03:01:58.472578 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:01:58.472584 | orchestrator | Thursday 19 March 2026 03:01:57 +0000 (0:00:00.342) 0:00:08.663 ********
2026-03-19 03:01:58.472591 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472597 | orchestrator |
2026-03-19 03:01:58.472603 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:01:58.472610 | orchestrator | Thursday 19 March 2026 03:01:58 +0000 (0:00:00.372) 0:00:09.036 ********
2026-03-19 03:01:58.472616 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:01:58.472621 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:01:58.472625 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:01:58.472630 | orchestrator |
2026-03-19 03:01:58.472636 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:01:58.472648 | orchestrator | Thursday 19 March 2026 03:01:58 +0000 (0:00:00.333) 0:00:09.369 ********
2026-03-19 03:02:12.712045 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:02:12.712192 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:02:12.712214 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:02:12.712232 | orchestrator |
2026-03-19 03:02:12.712250 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:02:12.712268 | orchestrator | Thursday 19 March 2026 03:01:58 +0000 (0:00:00.354) 0:00:09.724 ********
2026-03-19 03:02:12.712279 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712289 | orchestrator |
2026-03-19 03:02:12.712298 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:02:12.712306 | orchestrator | Thursday 19 March 2026 03:01:58 +0000 (0:00:00.145) 0:00:09.869 ********
2026-03-19 03:02:12.712316 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712325 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:02:12.712360 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:02:12.712369 | orchestrator |
2026-03-19 03:02:12.712379 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:02:12.712388 | orchestrator | Thursday 19 March 2026 03:01:59 +0000 (0:00:00.356) 0:00:10.226 ********
2026-03-19 03:02:12.712397 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:02:12.712406 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:02:12.712414 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:02:12.712423 | orchestrator |
2026-03-19 03:02:12.712432 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:02:12.712441 | orchestrator | Thursday 19 March 2026 03:01:59 +0000 (0:00:00.561) 0:00:10.788 ********
2026-03-19 03:02:12.712449 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712458 | orchestrator |
2026-03-19 03:02:12.712467 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:02:12.712475 | orchestrator | Thursday 19 March 2026 03:02:00 +0000 (0:00:00.147) 0:00:10.935 ********
2026-03-19 03:02:12.712484 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712492 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:02:12.712501 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:02:12.712509 | orchestrator |
2026-03-19 03:02:12.712518 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:02:12.712527 | orchestrator | Thursday 19 March 2026 03:02:00 +0000 (0:00:00.336) 0:00:11.271 ********
2026-03-19 03:02:12.712535 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:02:12.712544 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:02:12.712553 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:02:12.712561 | orchestrator |
2026-03-19 03:02:12.712570 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:02:12.712579 | orchestrator | Thursday 19 March 2026 03:02:00 +0000 (0:00:00.339) 0:00:11.611 ********
2026-03-19 03:02:12.712587 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712596 | orchestrator |
2026-03-19 03:02:12.712604 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:02:12.712613 | orchestrator | Thursday 19 March 2026 03:02:00 +0000 (0:00:00.154) 0:00:11.766 ********
2026-03-19 03:02:12.712621 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712630 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:02:12.712639 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:02:12.712647 | orchestrator |
2026-03-19 03:02:12.712656 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-19 03:02:12.712664 | orchestrator | Thursday 19 March 2026 03:02:01 +0000 (0:00:00.494) 0:00:12.260 ********
2026-03-19 03:02:12.712673 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:02:12.712682 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:02:12.712690 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:02:12.712699 | orchestrator |
2026-03-19 03:02:12.712707 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-19 03:02:12.712716 | orchestrator | Thursday 19 March 2026 03:02:01 +0000 (0:00:00.322) 0:00:12.583 ********
2026-03-19 03:02:12.712725 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712733 | orchestrator |
2026-03-19 03:02:12.712742 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-19 03:02:12.712760 | orchestrator | Thursday 19 March 2026 03:02:01 +0000 (0:00:00.138) 0:00:12.721 ********
2026-03-19 03:02:12.712768 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:02:12.712777 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:02:12.712786 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:02:12.712794 | orchestrator |
2026-03-19 03:02:12.712803 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-19 03:02:12.712811 | orchestrator | Thursday 19 March 2026
03:02:02 +0000 (0:00:00.321) 0:00:13.043 ******** 2026-03-19 03:02:12.712820 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:02:12.712828 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:02:12.712837 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:02:12.712852 | orchestrator | 2026-03-19 03:02:12.712861 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-19 03:02:12.712870 | orchestrator | Thursday 19 March 2026 03:02:03 +0000 (0:00:01.835) 0:00:14.878 ******** 2026-03-19 03:02:12.712878 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 03:02:12.712888 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 03:02:12.712896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-19 03:02:12.712905 | orchestrator | 2026-03-19 03:02:12.712913 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-19 03:02:12.712922 | orchestrator | Thursday 19 March 2026 03:02:05 +0000 (0:00:01.910) 0:00:16.789 ******** 2026-03-19 03:02:12.712931 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 03:02:12.713031 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 03:02:12.713043 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-19 03:02:12.713051 | orchestrator | 2026-03-19 03:02:12.713060 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-19 03:02:12.713086 | orchestrator | Thursday 19 March 2026 03:02:07 +0000 (0:00:01.783) 0:00:18.573 ******** 2026-03-19 03:02:12.713095 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 03:02:12.713104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 03:02:12.713112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-19 03:02:12.713121 | orchestrator | 2026-03-19 03:02:12.713130 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-19 03:02:12.713139 | orchestrator | Thursday 19 March 2026 03:02:09 +0000 (0:00:01.705) 0:00:20.279 ******** 2026-03-19 03:02:12.713147 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:02:12.713156 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:02:12.713164 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:02:12.713173 | orchestrator | 2026-03-19 03:02:12.713181 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-19 03:02:12.713190 | orchestrator | Thursday 19 March 2026 03:02:09 +0000 (0:00:00.499) 0:00:20.778 ******** 2026-03-19 03:02:12.713198 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:02:12.713207 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:02:12.713216 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:02:12.713224 | orchestrator | 2026-03-19 03:02:12.713233 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-19 03:02:12.713242 | orchestrator | Thursday 19 March 2026 03:02:10 +0000 (0:00:00.300) 0:00:21.079 ******** 2026-03-19 03:02:12.713250 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:02:12.713259 | orchestrator | 2026-03-19 03:02:12.713268 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-19 03:02:12.713276 | orchestrator | 
Thursday 19 March 2026 03:02:10 +0000 (0:00:00.647) 0:00:21.726 ******** 2026-03-19 03:02:12.713297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:02:12.713327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:02:13.350136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:02:13.350264 | orchestrator | 2026-03-19 03:02:13.350276 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-19 03:02:13.350284 | orchestrator | Thursday 19 March 2026 03:02:12 +0000 (0:00:01.877) 0:00:23.603 ******** 2026-03-19 03:02:13.350308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:13.350321 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:02:13.350335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:13.350342 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:02:13.350354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:15.899407 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:02:15.899498 | orchestrator | 2026-03-19 03:02:15.899508 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-03-19 03:02:15.899520 | orchestrator | Thursday 19 March 2026 03:02:13 +0000 (0:00:00.644) 0:00:24.248 ******** 2026-03-19 03:02:15.899557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:15.899573 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:02:15.899601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:15.899643 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:02:15.899687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 03:02:15.899695 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:02:15.899701 | orchestrator | 2026-03-19 03:02:15.899707 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-19 03:02:15.899713 | orchestrator | Thursday 19 March 2026 03:02:14 +0000 (0:00:00.834) 0:00:25.082 ******** 2026-03-19 03:02:15.899737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:03:03.314991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:03:03.315128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 03:03:03.315136 | orchestrator | 
2026-03-19 03:03:03.315142 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-19 03:03:03.315147 | orchestrator | Thursday 19 March 2026 03:02:15 +0000 (0:00:01.710) 0:00:26.793 ********
2026-03-19 03:03:03.315151 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:03:03.315155 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:03:03.315159 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:03:03.315163 | orchestrator |
2026-03-19 03:03:03.315167 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-19 03:03:03.315170 | orchestrator | Thursday 19 March 2026 03:02:16 +0000 (0:00:00.355) 0:00:27.149 ********
2026-03-19 03:03:03.315175 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:03:03.315179 | orchestrator |
2026-03-19 03:03:03.315182 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-19 03:03:03.315186 | orchestrator | Thursday 19 March 2026 03:02:16 +0000 (0:00:00.538) 0:00:27.687 ********
2026-03-19 03:03:03.315190 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:03:03.315194 | orchestrator |
2026-03-19 03:03:03.315198 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-19 03:03:03.315202 | orchestrator | Thursday 19 March 2026 03:02:19 +0000 (0:00:02.437) 0:00:30.125 ********
2026-03-19 03:03:03.315205 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:03:03.315209 | orchestrator |
2026-03-19 03:03:03.315213 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-19 03:03:03.315217 | orchestrator | Thursday 19 March 2026 03:02:22 +0000 (0:00:02.863) 0:00:32.989 ********
2026-03-19 03:03:03.315227 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:03:03.315231 | orchestrator |
2026-03-19 03:03:03.315234 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-19 03:03:03.315238 | orchestrator | Thursday 19 March 2026 03:02:39 +0000 (0:00:17.825) 0:00:50.814 ********
2026-03-19 03:03:03.315242 | orchestrator |
2026-03-19 03:03:03.315245 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-19 03:03:03.315249 | orchestrator | Thursday 19 March 2026 03:02:39 +0000 (0:00:00.075) 0:00:50.889 ********
2026-03-19 03:03:03.315253 | orchestrator |
2026-03-19 03:03:03.315257 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-19 03:03:03.315260 | orchestrator | Thursday 19 March 2026 03:02:40 +0000 (0:00:00.081) 0:00:50.971 ********
2026-03-19 03:03:03.315264 | orchestrator |
2026-03-19 03:03:03.315268 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-19 03:03:03.315272 | orchestrator | Thursday 19 March 2026 03:02:40 +0000 (0:00:00.072) 0:00:51.043 ********
2026-03-19 03:03:03.315275 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:03:03.315279 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:03:03.315283 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:03:03.315287 | orchestrator |
2026-03-19 03:03:03.315290 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:03:03.315295 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-19 03:03:03.315301 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-19 03:03:03.315304 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-19 03:03:03.315308 | orchestrator |
2026-03-19 03:03:03.315312 | orchestrator |
2026-03-19 03:03:03.315316 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:03:03.315319 | orchestrator | Thursday 19 March 2026 03:03:03 +0000 (0:00:23.150) 0:01:14.194 ********
2026-03-19 03:03:03.315323 | orchestrator | ===============================================================================
2026-03-19 03:03:03.315327 | orchestrator | horizon : Restart horizon container ------------------------------------ 23.15s
2026-03-19 03:03:03.315331 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.83s
2026-03-19 03:03:03.315334 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.86s
2026-03-19 03:03:03.315338 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.44s
2026-03-19 03:03:03.315345 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.91s
2026-03-19 03:03:03.315349 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.88s
2026-03-19 03:03:03.315353 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.84s
2026-03-19 03:03:03.315357 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.78s
2026-03-19 03:03:03.315360 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.71s
2026-03-19 03:03:03.315364 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.71s
2026-03-19 03:03:03.315368 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s
2026-03-19 03:03:03.315372 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s
2026-03-19 03:03:03.315375 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2026-03-19 03:03:03.315382 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-03-19 03:03:03.697168 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s
2026-03-19 03:03:03.697257 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-03-19 03:03:03.697299 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2026-03-19 03:03:03.697316 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2026-03-19 03:03:03.697329 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s
2026-03-19 03:03:03.697339 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2026-03-19 03:03:06.080326 | orchestrator | 2026-03-19 03:03:06 | INFO  | Task 4e85cf29-71fe-432a-b837-979cf3f9d97c (skyline) was prepared for execution.
2026-03-19 03:03:06.080415 | orchestrator | 2026-03-19 03:03:06 | INFO  | It takes a moment until task 4e85cf29-71fe-432a-b837-979cf3f9d97c (skyline) has been started and output is visible here.
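The PLAY RECAP block above uses Ansible's standard `host : ok=N changed=N ...` counter format. As a minimal sketch (a hypothetical helper, not part of this job), such recap lines can be parsed from console output to check run health:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 ...
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (host, {counter: value}) for a recap line, or None if it doesn't match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counters"))}
    return m.group("host"), counters

host, counts = parse_recap_line(
    "testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0"
)
# A run is considered healthy when nothing failed and every host was reachable.
assert counts["failed"] == 0 and counts["unreachable"] == 0
```

In this log all three nodes report `failed=0` and `unreachable=0`, so the horizon play completed cleanly.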
2026-03-19 03:03:39.791669 | orchestrator |
2026-03-19 03:03:39.791777 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:03:39.791787 | orchestrator |
2026-03-19 03:03:39.791792 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:03:39.791797 | orchestrator | Thursday 19 March 2026 03:03:10 +0000 (0:00:00.268) 0:00:00.268 ********
2026-03-19 03:03:39.791801 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:03:39.791807 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:03:39.791811 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:03:39.791816 | orchestrator |
2026-03-19 03:03:39.791883 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:03:39.791895 | orchestrator | Thursday 19 March 2026 03:03:10 +0000 (0:00:00.318) 0:00:00.586 ********
2026-03-19 03:03:39.791903 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-03-19 03:03:39.791912 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-03-19 03:03:39.791919 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-03-19 03:03:39.791927 | orchestrator |
2026-03-19 03:03:39.791934 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-03-19 03:03:39.791941 | orchestrator |
2026-03-19 03:03:39.791948 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-19 03:03:39.791955 | orchestrator | Thursday 19 March 2026 03:03:11 +0000 (0:00:00.446) 0:00:01.033 ********
2026-03-19 03:03:39.791963 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:03:39.791971 | orchestrator |
2026-03-19 03:03:39.791978 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-03-19 03:03:39.791985 | orchestrator | Thursday 19 March 2026 03:03:11 +0000 (0:00:00.535) 0:00:01.568 ********
2026-03-19 03:03:39.791992 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-03-19 03:03:39.792000 | orchestrator |
2026-03-19 03:03:39.792008 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-03-19 03:03:39.792015 | orchestrator | Thursday 19 March 2026 03:03:15 +0000 (0:00:03.618) 0:00:05.187 ********
2026-03-19 03:03:39.792023 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-03-19 03:03:39.792032 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-03-19 03:03:39.792036 | orchestrator |
2026-03-19 03:03:39.792041 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-03-19 03:03:39.792045 | orchestrator | Thursday 19 March 2026 03:03:22 +0000 (0:00:07.275) 0:00:12.463 ********
2026-03-19 03:03:39.792049 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 03:03:39.792054 | orchestrator |
2026-03-19 03:03:39.792058 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-03-19 03:03:39.792063 | orchestrator | Thursday 19 March 2026 03:03:26 +0000 (0:00:03.689) 0:00:16.153 ********
2026-03-19 03:03:39.792067 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 03:03:39.792072 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-03-19 03:03:39.792097 | orchestrator |
2026-03-19 03:03:39.792102 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-03-19 03:03:39.792106 | orchestrator | Thursday 19 March 2026 03:03:30 +0000 (0:00:04.354) 0:00:20.507 ********
2026-03-19 03:03:39.792110 | orchestrator | ok: [testbed-node-0] => (item=admin)
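The service definitions dumped throughout this log carry `haproxy` entries (e.g. `skyline_apiserver` with `port`/`listen_port` 9998 and `tls_backend: 'no'`). kolla-ansible actually renders these into HAProxy configuration via Jinja2 templates; the following standalone sketch only illustrates the shape of the mapping, with field names taken from the log and everything else hypothetical:

```python
# Illustrative only: mimics how a kolla-style haproxy service entry could map
# to a minimal HAProxy frontend/backend pair. The real rendering is done by
# kolla-ansible's Jinja2 templates, not by this function.
def render_listen_block(name, svc, backends):
    """Build a minimal HAProxy frontend/backend pair for one service entry."""
    lines = [
        f"frontend {name}_front",
        f"    mode {svc['mode']}",
        f"    bind *:{svc['port']}",
        f"    default_backend {name}_back",
        f"backend {name}_back",
        f"    mode {svc['mode']}",
    ]
    for host, addr in backends:
        # tls_backend is 'no' in the log, so no 'ssl' options on the servers.
        lines.append(f"    server {host} {addr}:{svc['listen_port']} check")
    return "\n".join(lines)

# Values copied from the skyline_apiserver entry in the log output.
svc = {"enabled": "yes", "mode": "http", "external": False,
       "port": "9998", "listen_port": "9998", "tls_backend": "no"}
cfg = render_listen_block("skyline_apiserver", svc,
                          [("testbed-node-0", "192.168.16.10"),
                           ("testbed-node-1", "192.168.16.11"),
                           ("testbed-node-2", "192.168.16.12")])
print(cfg)
```

The three backend addresses correspond to the per-node `healthcheck_curl` targets (192.168.16.10 through .12) visible in the healthcheck definitions above.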
2026-03-19 03:03:39.792114 | orchestrator | 2026-03-19 03:03:39.792118 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-19 03:03:39.792122 | orchestrator | Thursday 19 March 2026 03:03:33 +0000 (0:00:03.510) 0:00:24.018 ******** 2026-03-19 03:03:39.792139 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-19 03:03:39.792143 | orchestrator | 2026-03-19 03:03:39.792147 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-19 03:03:39.792151 | orchestrator | Thursday 19 March 2026 03:03:38 +0000 (0:00:04.394) 0:00:28.412 ******** 2026-03-19 03:03:39.792159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:39.792180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:39.792187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:39.792195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:39.792212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:39.792225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558070 | orchestrator | 2026-03-19 03:03:43.558155 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-19 03:03:43.558165 | orchestrator | Thursday 19 March 2026 03:03:39 +0000 (0:00:01.391) 0:00:29.804 ******** 2026-03-19 03:03:43.558170 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:03:43.558174 | orchestrator | 2026-03-19 03:03:43.558181 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-19 03:03:43.558187 | orchestrator | Thursday 19 March 2026 03:03:40 +0000 (0:00:00.720) 0:00:30.525 ******** 2026-03-19 03:03:43.558196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:43.558286 | orchestrator | 2026-03-19 03:03:43.558290 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-19 03:03:43.558294 | orchestrator | Thursday 19 March 2026 03:03:42 +0000 (0:00:02.448) 0:00:32.973 ******** 2026-03-19 03:03:43.558301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:43.558305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:43.558309 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:03:43.558317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.794799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.794963 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:03:44.794997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795014 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:03:44.795031 | orchestrator | 2026-03-19 03:03:44.795070 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-19 03:03:44.795080 | orchestrator | Thursday 19 March 2026 03:03:43 +0000 (0:00:00.602) 0:00:33.576 ******** 2026-03-19 03:03:44.795088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795145 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:03:44.795157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795173 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:03:44.795180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-19 03:03:44.795198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-19 03:03:53.344712 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:03:53.344830 | orchestrator | 2026-03-19 03:03:53.344844 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-19 03:03:53.344906 | orchestrator | Thursday 19 March 2026 03:03:44 +0000 (0:00:01.229) 0:00:34.805 ******** 2026-03-19 03:03:53.344936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.344948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.344957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.344990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.345016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.345030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.345038 | orchestrator | 2026-03-19 03:03:53.345047 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-19 03:03:53.345055 | orchestrator | Thursday 19 March 2026 03:03:47 +0000 (0:00:02.554) 0:00:37.360 ******** 2026-03-19 03:03:53.345063 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-19 03:03:53.345071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-19 03:03:53.345079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-19 03:03:53.345086 | orchestrator | 2026-03-19 03:03:53.345094 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-19 03:03:53.345102 | orchestrator | Thursday 19 March 2026 03:03:48 +0000 (0:00:01.616) 0:00:38.977 ******** 2026-03-19 03:03:53.345110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-19 03:03:53.345124 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-19 03:03:53.345132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-19 03:03:53.345139 | orchestrator | 2026-03-19 03:03:53.345147 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-19 03:03:53.345155 | orchestrator | Thursday 19 March 2026 03:03:50 +0000 (0:00:02.030) 0:00:41.007 ******** 2026-03-19 03:03:53.345163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:53.345179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515571 | orchestrator | 2026-03-19 03:03:55.515581 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-19 03:03:55.515591 | orchestrator | Thursday 19 March 2026 03:03:53 +0000 (0:00:02.359) 0:00:43.366 ******** 2026-03-19 03:03:55.515599 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:03:55.515609 | orchestrator | skipping: 
[testbed-node-1] 2026-03-19 03:03:55.515617 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:03:55.515624 | orchestrator | 2026-03-19 03:03:55.515647 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-19 03:03:55.515656 | orchestrator | Thursday 19 March 2026 03:03:53 +0000 (0:00:00.304) 0:00:43.671 ******** 2026-03-19 03:03:55.515669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:03:55.515723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:04:24.265202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-19 03:04:24.265340 | orchestrator | 2026-03-19 03:04:24.265353 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-19 03:04:24.265360 | orchestrator | Thursday 19 March 2026 03:03:55 +0000 (0:00:01.862) 0:00:45.533 ******** 2026-03-19 03:04:24.265365 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:04:24.265371 | orchestrator | 2026-03-19 03:04:24.265376 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-19 03:04:24.265381 | orchestrator | Thursday 19 March 2026 03:03:58 +0000 (0:00:02.589) 0:00:48.122 ******** 2026-03-19 03:04:24.265386 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:04:24.265391 | orchestrator | 2026-03-19 03:04:24.265396 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-19 03:04:24.265401 | orchestrator | Thursday 19 March 2026 03:04:00 +0000 (0:00:02.542) 0:00:50.665 ******** 2026-03-19 03:04:24.265406 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:04:24.265410 | orchestrator | 2026-03-19 03:04:24.265416 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-19 03:04:24.265421 | orchestrator | Thursday 19 March 2026 03:04:08 +0000 (0:00:07.841) 0:00:58.507 ******** 2026-03-19 03:04:24.265426 | orchestrator | 2026-03-19 03:04:24.265431 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-19 03:04:24.265435 | orchestrator | Thursday 19 March 2026 03:04:08 +0000 (0:00:00.066) 0:00:58.573 ******** 2026-03-19 03:04:24.265440 | orchestrator | 2026-03-19 03:04:24.265445 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-19 03:04:24.265450 | orchestrator | Thursday 19 March 2026 03:04:08 +0000 (0:00:00.066) 0:00:58.640 ******** 2026-03-19 03:04:24.265455 | orchestrator | 2026-03-19 03:04:24.265460 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-19 03:04:24.265465 | orchestrator | Thursday 19 March 2026 03:04:08 +0000 (0:00:00.068) 0:00:58.709 ******** 2026-03-19 03:04:24.265469 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:04:24.265474 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:04:24.265479 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:04:24.265484 | orchestrator | 2026-03-19 03:04:24.265488 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-19 03:04:24.265493 | orchestrator | Thursday 19 March 2026 03:04:14 +0000 (0:00:06.044) 0:01:04.754 ******** 2026-03-19 03:04:24.265498 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:04:24.265503 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:04:24.265508 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:04:24.265512 | orchestrator | 2026-03-19 03:04:24.265517 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:04:24.265523 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 03:04:24.265530 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 03:04:24.265534 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 03:04:24.265539 | orchestrator | 2026-03-19 03:04:24.265544 | orchestrator | 2026-03-19 03:04:24.265549 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:04:24.265554 | orchestrator | Thursday 19 
March 2026 03:04:23 +0000 (0:00:09.215) 0:01:13.969 ******** 2026-03-19 03:04:24.265564 | orchestrator | =============================================================================== 2026-03-19 03:04:24.265568 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.22s 2026-03-19 03:04:24.265573 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.84s 2026-03-19 03:04:24.265578 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 7.28s 2026-03-19 03:04:24.265597 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.04s 2026-03-19 03:04:24.265602 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 4.39s 2026-03-19 03:04:24.265607 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.35s 2026-03-19 03:04:24.265611 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.69s 2026-03-19 03:04:24.265616 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.62s 2026-03-19 03:04:24.265632 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.51s 2026-03-19 03:04:24.265638 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.59s 2026-03-19 03:04:24.265642 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.55s 2026-03-19 03:04:24.265647 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.54s 2026-03-19 03:04:24.265652 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.45s 2026-03-19 03:04:24.265657 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.36s 2026-03-19 03:04:24.265662 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.03s 2026-03-19 03:04:24.265666 | orchestrator | skyline : Check skyline container --------------------------------------- 1.86s 2026-03-19 03:04:24.265671 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.62s 2026-03-19 03:04:24.265676 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.39s 2026-03-19 03:04:24.265681 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.23s 2026-03-19 03:04:24.265686 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.72s 2026-03-19 03:04:26.598145 | orchestrator | 2026-03-19 03:04:26 | INFO  | Task 2cbed393-14f8-47c6-a4cd-1b8d6f620336 (glance) was prepared for execution. 2026-03-19 03:04:26.598235 | orchestrator | 2026-03-19 03:04:26 | INFO  | It takes a moment until task 2cbed393-14f8-47c6-a4cd-1b8d6f620336 (glance) has been started and output is visible here. 
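The PLAY RECAP and TASKS RECAP blocks above follow Ansible's standard fixed format (`host : ok=N changed=N …`), which makes them easy to post-process when analyzing job logs like this one. A minimal sketch (the line format is standard Ansible output; the parser itself is illustrative, not part of the job):

```python
import re

# Matches standard Ansible PLAY RECAP lines, e.g.
#   testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (host, {counter: value}) for one PLAY RECAP line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counts = {
        key: int(val)
        for key, val in (pair.split("=") for pair in m.group("counts").split())
    }
    return m.group("host"), counts

host, counts = parse_recap(
    "testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0"
)
print(host, counts["changed"], counts["failed"])  # testbed-node-0 16 0
```

A `failed` or `unreachable` count greater than zero on any host is the usual signal to flag a run for closer inspection.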
2026-03-19 03:05:02.677508 | orchestrator |
2026-03-19 03:05:02.677609 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:05:02.677619 | orchestrator |
2026-03-19 03:05:02.677627 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:05:02.677635 | orchestrator | Thursday 19 March 2026 03:04:30 +0000 (0:00:00.272) 0:00:00.272 ********
2026-03-19 03:05:02.677642 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:05:02.677650 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:05:02.677657 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:05:02.677663 | orchestrator |
2026-03-19 03:05:02.677671 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:05:02.677677 | orchestrator | Thursday 19 March 2026 03:04:31 +0000 (0:00:00.309) 0:00:00.582 ********
2026-03-19 03:05:02.677684 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-19 03:05:02.677692 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-19 03:05:02.677698 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-19 03:05:02.677705 | orchestrator |
2026-03-19 03:05:02.677712 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-19 03:05:02.677719 | orchestrator |
2026-03-19 03:05:02.677727 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-19 03:05:02.677772 | orchestrator | Thursday 19 March 2026 03:04:31 +0000 (0:00:00.407) 0:00:00.990 ********
2026-03-19 03:05:02.677790 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:05:02.677893 | orchestrator |
2026-03-19 03:05:02.677905 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-19 03:05:02.677916 | orchestrator | Thursday 19 March 2026 03:04:32 +0000 (0:00:00.566) 0:00:01.556 ********
2026-03-19 03:05:02.677928 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-19 03:05:02.677939 | orchestrator |
2026-03-19 03:05:02.677950 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-19 03:05:02.677962 | orchestrator | Thursday 19 March 2026 03:04:35 +0000 (0:00:03.654) 0:00:05.210 ********
2026-03-19 03:05:02.677974 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-19 03:05:02.677986 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-19 03:05:02.677999 | orchestrator |
2026-03-19 03:05:02.678012 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-19 03:05:02.678072 | orchestrator | Thursday 19 March 2026 03:04:42 +0000 (0:00:07.031) 0:00:12.242 ********
2026-03-19 03:05:02.678082 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 03:05:02.678091 | orchestrator |
2026-03-19 03:05:02.678099 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-19 03:05:02.678107 | orchestrator | Thursday 19 March 2026 03:04:46 +0000 (0:00:03.552) 0:00:15.794 ********
2026-03-19 03:05:02.678115 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 03:05:02.678123 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-19 03:05:02.678131 | orchestrator |
2026-03-19 03:05:02.678140 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-19 03:05:02.678147 | orchestrator | Thursday 19 March 2026 03:04:50 +0000 (0:00:04.453) 0:00:20.247 ********
2026-03-19 03:05:02.678155 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-19
03:05:02.678164 | orchestrator | 2026-03-19 03:05:02.678185 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-19 03:05:02.678193 | orchestrator | Thursday 19 March 2026 03:04:54 +0000 (0:00:03.529) 0:00:23.777 ******** 2026-03-19 03:05:02.678201 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-19 03:05:02.678209 | orchestrator | 2026-03-19 03:05:02.678217 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-19 03:05:02.678225 | orchestrator | Thursday 19 March 2026 03:04:58 +0000 (0:00:04.077) 0:00:27.854 ******** 2026-03-19 03:05:02.678260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:02.678282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:02.678295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:02.678304 | orchestrator | 2026-03-19 03:05:02.678313 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-03-19 03:05:02.678321 | orchestrator | Thursday 19 March 2026 03:05:01 +0000 (0:00:03.502) 0:00:31.357 ********
2026-03-19 03:05:02.678335 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:05:02.678343 | orchestrator |
2026-03-19 03:05:02.678356 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-19 03:05:17.235111 | orchestrator | Thursday 19 March 2026 03:05:02 +0000 (0:00:00.730) 0:00:32.087 ********
2026-03-19 03:05:17.235234 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:05:17.235252 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:05:17.235264 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:05:17.235275 | orchestrator |
2026-03-19 03:05:17.235287 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-19 03:05:17.235300 | orchestrator | Thursday 19 March 2026 03:05:06 +0000 (0:00:03.441) 0:00:35.529 ********
2026-03-19 03:05:17.235314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235340 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235352 | orchestrator |
2026-03-19 03:05:17.235363 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-19 03:05:17.235374 | orchestrator | Thursday 19 March 2026 03:05:07 +0000 (0:00:01.574) 0:00:37.104 ********
2026-03-19 03:05:17.235386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235409 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:05:17.235421 | orchestrator |
2026-03-19 03:05:17.235431 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-19 03:05:17.235443 | orchestrator | Thursday 19 March 2026 03:05:09 +0000 (0:00:01.355) 0:00:38.459 ********
2026-03-19 03:05:17.235455 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:05:17.235467 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:05:17.235478 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:05:17.235489 | orchestrator |
2026-03-19 03:05:17.235501 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-19 03:05:17.235513 | orchestrator | Thursday 19 March 2026 03:05:09 +0000 (0:00:00.665) 0:00:39.125 ********
2026-03-19 03:05:17.235523 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:05:17.235536 | orchestrator |
2026-03-19 03:05:17.235547 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-19 03:05:17.235559 | orchestrator | Thursday 19 March 2026 03:05:09 +0000 (0:00:00.136) 0:00:39.261 ********
2026-03-19 03:05:17.235570 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:05:17.235582 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:05:17.235591 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:05:17.235603 | orchestrator |
2026-03-19 03:05:17.235616 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-19 03:05:17.235630 | orchestrator | Thursday 19 March 2026 03:05:10 +0000 (0:00:00.292) 0:00:39.553 ********
2026-03-19 03:05:17.235642 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:05:17.235655 | orchestrator | 2026-03-19 03:05:17.235668 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-19 03:05:17.235681 | orchestrator | Thursday 19 March 2026 03:05:10 +0000 (0:00:00.712) 0:00:40.266 ******** 2026-03-19 03:05:17.235720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:17.235814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:17.235834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:17.235854 | orchestrator | 2026-03-19 03:05:17.235865 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-19 03:05:17.235875 | orchestrator | Thursday 19 March 2026 03:05:14 +0000 (0:00:03.701) 0:00:43.967 ******** 2026-03-19 03:05:17.235895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:20.618457 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:20.618614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:20.618669 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:20.618684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:20.618696 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:20.618708 | orchestrator | 2026-03-19 03:05:20.618719 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-19 03:05:20.618732 | orchestrator | Thursday 19 March 2026 03:05:17 +0000 (0:00:02.677) 0:00:46.645 ******** 2026-03-19 03:05:20.618773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:20.618848 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:20.618860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:20.618872 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:20.618895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 03:05:53.639246 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639359 | orchestrator | 2026-03-19 03:05:53.639371 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-19 03:05:53.639379 | orchestrator | Thursday 19 March 2026 03:05:20 +0000 (0:00:03.380) 0:00:50.026 ******** 2026-03-19 03:05:53.639406 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639412 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639419 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639425 | orchestrator | 2026-03-19 03:05:53.639431 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-19 03:05:53.639438 | orchestrator | Thursday 19 March 2026 03:05:23 +0000 (0:00:03.174) 0:00:53.200 ******** 2026-03-19 03:05:53.639461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:53.639471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:53.639498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:05:53.639513 | orchestrator | 2026-03-19 03:05:53.639519 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-19 03:05:53.639526 | orchestrator | Thursday 19 March 2026 03:05:27 +0000 (0:00:03.881) 0:00:57.081 ******** 2026-03-19 03:05:53.639532 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:05:53.639538 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:05:53.639545 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:05:53.639551 | orchestrator | 2026-03-19 03:05:53.639557 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-19 03:05:53.639563 | orchestrator | Thursday 19 March 2026 03:05:33 +0000 (0:00:05.539) 0:01:02.621 ******** 2026-03-19 03:05:53.639569 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639576 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639582 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639588 | orchestrator | 2026-03-19 03:05:53.639594 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-19 03:05:53.639600 | orchestrator | Thursday 19 March 2026 03:05:36 +0000 (0:00:03.226) 0:01:05.848 ******** 2026-03-19 03:05:53.639607 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639613 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639619 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639625 | orchestrator | 2026-03-19 03:05:53.639631 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-19 03:05:53.639638 | orchestrator | Thursday 19 March 2026 03:05:39 +0000 (0:00:03.093) 0:01:08.941 ******** 2026-03-19 03:05:53.639644 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639650 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639656 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639662 | orchestrator | 2026-03-19 03:05:53.639669 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-19 03:05:53.639675 | orchestrator | Thursday 19 March 2026 03:05:42 +0000 (0:00:03.203) 0:01:12.144 ******** 2026-03-19 03:05:53.639681 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639687 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639693 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639699 | orchestrator | 2026-03-19 03:05:53.639706 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-19 03:05:53.639712 | orchestrator | Thursday 19 March 2026 03:05:46 +0000 (0:00:03.352) 0:01:15.497 ******** 2026-03-19 03:05:53.639723 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639729 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639735 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639742 | orchestrator | 2026-03-19 03:05:53.639748 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-19 03:05:53.639778 | orchestrator | Thursday 19 March 2026 03:05:46 +0000 (0:00:00.527) 0:01:16.024 ******** 2026-03-19 03:05:53.639788 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 03:05:53.639800 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:05:53.639812 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 03:05:53.639823 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:05:53.639834 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-19 03:05:53.639844 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:05:53.639852 | orchestrator | 2026-03-19 03:05:53.639859 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-19 03:05:53.639867 | orchestrator | Thursday 19 March 2026 03:05:49 +0000 (0:00:03.117) 0:01:19.142 ******** 2026-03-19 03:05:53.639875 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:05:53.639882 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:05:53.639890 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:05:53.639897 | orchestrator | 2026-03-19 03:05:53.639904 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-19 03:05:53.639917 | orchestrator | Thursday 19 March 2026 03:05:53 +0000 (0:00:03.905) 0:01:23.048 ******** 2026-03-19 03:07:09.483904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:07:09.484020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:07:09.484086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 03:07:09.484094 | orchestrator | 2026-03-19 03:07:09.484099 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-19 03:07:09.484105 | orchestrator | Thursday 19 March 2026 03:05:56 +0000 (0:00:03.300) 0:01:26.348 ******** 2026-03-19 03:07:09.484109 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:07:09.484114 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:07:09.484118 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:07:09.484122 | orchestrator | 2026-03-19 03:07:09.484126 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-19 03:07:09.484129 | orchestrator | Thursday 19 March 2026 03:05:57 +0000 (0:00:00.354) 0:01:26.702 ******** 2026-03-19 03:07:09.484133 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484137 | orchestrator | 2026-03-19 03:07:09.484141 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-19 03:07:09.484145 | orchestrator | Thursday 19 March 2026 03:05:59 +0000 (0:00:02.265) 0:01:28.968 ******** 2026-03-19 03:07:09.484149 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484153 | orchestrator | 2026-03-19 03:07:09.484157 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-19 03:07:09.484165 | orchestrator | Thursday 19 March 2026 03:06:01 +0000 (0:00:02.448) 0:01:31.416 ******** 2026-03-19 03:07:09.484169 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484173 | orchestrator | 2026-03-19 03:07:09.484177 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-19 03:07:09.484180 | orchestrator | Thursday 19 March 2026 03:06:04 +0000 (0:00:02.236) 0:01:33.653 ******** 2026-03-19 03:07:09.484184 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484188 | orchestrator | 2026-03-19 03:07:09.484192 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-19 03:07:09.484196 | orchestrator | Thursday 19 March 2026 03:06:34 +0000 (0:00:30.241) 0:02:03.895 ******** 2026-03-19 03:07:09.484199 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484203 | orchestrator | 2026-03-19 03:07:09.484207 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-19 03:07:09.484211 | orchestrator | Thursday 19 March 2026 03:06:36 +0000 (0:00:02.299) 0:02:06.194 ******** 2026-03-19 03:07:09.484215 | orchestrator | 2026-03-19 03:07:09.484218 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-19 03:07:09.484222 | orchestrator | Thursday 19 March 2026 03:06:36 +0000 (0:00:00.072) 0:02:06.266 ******** 2026-03-19 03:07:09.484226 | orchestrator | 2026-03-19 03:07:09.484230 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-19 03:07:09.484234 | orchestrator | Thursday 19 March 2026 03:06:36 +0000 (0:00:00.069) 0:02:06.336 ******** 2026-03-19 03:07:09.484237 | orchestrator | 2026-03-19 03:07:09.484241 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-19 03:07:09.484245 | orchestrator | Thursday 19 March 2026 03:06:36 +0000 (0:00:00.070) 0:02:06.407 ******** 2026-03-19 03:07:09.484249 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:07:09.484252 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:07:09.484256 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:07:09.484260 | orchestrator | 2026-03-19 03:07:09.484264 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:07:09.484269 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-19 03:07:09.484274 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-19 03:07:09.484278 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-19 03:07:09.484282 | orchestrator | 2026-03-19 03:07:09.484286 | orchestrator | 2026-03-19 03:07:09.484290 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:07:09.484294 | orchestrator | Thursday 19 March 2026 03:07:09 +0000 (0:00:32.473) 0:02:38.880 ******** 2026-03-19 03:07:09.484297 | orchestrator | =============================================================================== 2026-03-19 03:07:09.484301 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.47s 2026-03-19 03:07:09.484305 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.24s 2026-03-19 03:07:09.484309 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.03s 2026-03-19 03:07:09.484316 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.54s 2026-03-19 03:07:09.803980 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.45s 2026-03-19 03:07:09.804089 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.08s 2026-03-19 03:07:09.804102 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.91s 2026-03-19 03:07:09.804111 | orchestrator | glance : Copying over config.json files for services -------------------- 3.88s 2026-03-19 03:07:09.804120 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.70s 2026-03-19 03:07:09.804191 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.65s 2026-03-19 03:07:09.804209 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.55s 2026-03-19 03:07:09.804223 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.53s 2026-03-19 03:07:09.804236 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.50s 2026-03-19 03:07:09.804251 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.44s 2026-03-19 03:07:09.804266 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.38s 2026-03-19 03:07:09.804298 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.35s 2026-03-19 03:07:09.804326 | orchestrator | glance : Check glance containers ---------------------------------------- 3.30s 2026-03-19 03:07:09.804341 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.23s 2026-03-19 03:07:09.804355 | orchestrator | 
glance : Copying over glance-image-import.conf -------------------------- 3.20s 2026-03-19 03:07:09.804370 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.17s 2026-03-19 03:07:12.132395 | orchestrator | 2026-03-19 03:07:12 | INFO  | Task 3c38d167-4681-4758-9f34-71dc367ec14b (cinder) was prepared for execution. 2026-03-19 03:07:12.132501 | orchestrator | 2026-03-19 03:07:12 | INFO  | It takes a moment until task 3c38d167-4681-4758-9f34-71dc367ec14b (cinder) has been started and output is visible here. 2026-03-19 03:07:49.207569 | orchestrator | 2026-03-19 03:07:49.207737 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:07:49.207755 | orchestrator | 2026-03-19 03:07:49.207765 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:07:49.207775 | orchestrator | Thursday 19 March 2026 03:07:16 +0000 (0:00:00.263) 0:00:00.263 ******** 2026-03-19 03:07:49.207784 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:07:49.207794 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:07:49.207803 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:07:49.207812 | orchestrator | 2026-03-19 03:07:49.207821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:07:49.207830 | orchestrator | Thursday 19 March 2026 03:07:16 +0000 (0:00:00.303) 0:00:00.567 ******** 2026-03-19 03:07:49.207839 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-19 03:07:49.207848 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-19 03:07:49.207856 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-19 03:07:49.207865 | orchestrator | 2026-03-19 03:07:49.207874 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-19 03:07:49.207882 | orchestrator | 2026-03-19 
03:07:49.207891 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 03:07:49.207900 | orchestrator | Thursday 19 March 2026 03:07:17 +0000 (0:00:00.436) 0:00:01.004 ******** 2026-03-19 03:07:49.207909 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:07:49.207918 | orchestrator | 2026-03-19 03:07:49.207926 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-19 03:07:49.207935 | orchestrator | Thursday 19 March 2026 03:07:17 +0000 (0:00:00.559) 0:00:01.564 ******** 2026-03-19 03:07:49.207949 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-19 03:07:49.207964 | orchestrator | 2026-03-19 03:07:49.207979 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-19 03:07:49.207994 | orchestrator | Thursday 19 March 2026 03:07:21 +0000 (0:00:03.686) 0:00:05.250 ******** 2026-03-19 03:07:49.208010 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-19 03:07:49.208025 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-19 03:07:49.208086 | orchestrator | 2026-03-19 03:07:49.208104 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-19 03:07:49.208121 | orchestrator | Thursday 19 March 2026 03:07:28 +0000 (0:00:06.882) 0:00:12.133 ******** 2026-03-19 03:07:49.208139 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:07:49.208154 | orchestrator | 2026-03-19 03:07:49.208168 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-19 03:07:49.208177 | orchestrator | Thursday 19 March 2026 03:07:31 +0000 (0:00:03.548) 
0:00:15.682 ******** 2026-03-19 03:07:49.208186 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:07:49.208195 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-19 03:07:49.208204 | orchestrator | 2026-03-19 03:07:49.208213 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-19 03:07:49.208221 | orchestrator | Thursday 19 March 2026 03:07:35 +0000 (0:00:04.128) 0:00:19.810 ******** 2026-03-19 03:07:49.208230 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:07:49.208239 | orchestrator | 2026-03-19 03:07:49.208247 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-19 03:07:49.208256 | orchestrator | Thursday 19 March 2026 03:07:39 +0000 (0:00:03.422) 0:00:23.232 ******** 2026-03-19 03:07:49.208265 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-19 03:07:49.208273 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-19 03:07:49.208282 | orchestrator | 2026-03-19 03:07:49.208290 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-19 03:07:49.208299 | orchestrator | Thursday 19 March 2026 03:07:47 +0000 (0:00:07.878) 0:00:31.110 ******** 2026-03-19 03:07:49.208330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:07:49.208373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:07:49.208390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:07:49.208418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:49.208435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:49.208451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:49.208462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:49.208478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:54.913498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:54.913635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:54.913650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:07:54.913756 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-19 03:07:54.913767 | orchestrator |
2026-03-19 03:07:54.913775 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-19 03:07:54.913790 | orchestrator | Thursday 19 March 2026 03:07:49 +0000 (0:00:02.117) 0:00:33.228 ********
2026-03-19 03:07:54.913797 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:07:54.913811 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:07:54.913817 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:07:54.913823 | orchestrator |
2026-03-19 03:07:54.913828 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-19 03:07:54.913835 | orchestrator | Thursday 19 March 2026 03:07:49 +0000 (0:00:00.460) 0:00:33.689 ********
2026-03-19 03:07:54.913841 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:07:54.913847 | orchestrator |
2026-03-19 03:07:54.913854 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-19 03:07:54.913860 | orchestrator | Thursday 19 March 2026 03:07:50 +0000 (0:00:01.615) 0:00:34.237 ********
2026-03-19 03:07:54.913867 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-19 03:07:54.913874 |
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-19 03:07:54.913880 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-19 03:07:54.913894 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-19 03:07:54.913900 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-19 03:07:54.913905 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-19 03:07:54.913911 | orchestrator | 2026-03-19 03:07:54.913917 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-19 03:07:54.913923 | orchestrator | Thursday 19 March 2026 03:07:51 +0000 (0:00:01.615) 0:00:35.853 ******** 2026-03-19 03:07:54.913947 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:07:54.913957 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:07:54.913971 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:07:54.913976 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:07:54.913987 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:08:05.925008 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-19 03:08:05.925128 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 03:08:05.925160 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 03:08:05.925169 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 03:08:05.925179 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 03:08:05.925231 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-19 
03:08:05.925241 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-19 03:08:05.925250 | orchestrator |
2026-03-19 03:08:05.925260 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-19 03:08:05.925270 | orchestrator | Thursday 19 March 2026 03:07:55 +0000 (0:00:03.289) 0:00:39.142 ********
2026-03-19 03:08:05.925279 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:08:05.925289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:08:05.925297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-19 03:08:05.925305 | orchestrator |
2026-03-19 03:08:05.925312 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-19 03:08:05.925320 | orchestrator | Thursday 19 March 2026 03:07:56 +0000 (0:00:01.480) 0:00:40.623 ********
2026-03-19 03:08:05.925328 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-19 03:08:05.925335 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-19 03:08:05.925343 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-19 03:08:05.925357 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 03:08:05.925366 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 03:08:05.925374 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-19 03:08:05.925382 | orchestrator |
2026-03-19 03:08:05.925390 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-19 03:08:05.925398 | orchestrator | Thursday 19 March 2026 03:07:59 +0000 (0:00:02.743) 0:00:43.366 ********
2026-03-19 03:08:05.925407 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-19 03:08:05.925422 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-19 03:08:05.925430 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-19 03:08:05.925438 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-19 03:08:05.925446 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-19 03:08:05.925454 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-19 03:08:05.925462 | orchestrator |
2026-03-19 03:08:05.925471 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-19 03:08:05.925479 | orchestrator | Thursday 19 March 2026 03:08:00 +0000 (0:00:00.131) 0:00:44.538 ********
2026-03-19 03:08:05.925487 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:08:05.925497 | orchestrator |
2026-03-19 03:08:05.925506 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-19 03:08:05.925515 | orchestrator | Thursday 19 March 2026 03:08:00 +0000 (0:00:00.131) 0:00:44.670 ********
2026-03-19 03:08:05.925524 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:08:05.925534 | orchestrator |
skipping: [testbed-node-1]
2026-03-19 03:08:05.925543 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:08:05.925550 | orchestrator |
2026-03-19 03:08:05.925559 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-19 03:08:05.925567 | orchestrator | Thursday 19 March 2026 03:08:01 +0000 (0:00:00.507) 0:00:45.177 ********
2026-03-19 03:08:05.925576 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:08:05.925585 | orchestrator |
2026-03-19 03:08:05.925595 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-19 03:08:05.925604 | orchestrator | Thursday 19 March 2026 03:08:01 +0000 (0:00:00.556) 0:00:45.733 ********
2026-03-19 03:08:05.925623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-19 03:08:06.817419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:06.817532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:06.817560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 
03:08:06.817607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:06.817619 | orchestrator | 2026-03-19 03:08:06.817624 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-19 03:08:06.817630 | orchestrator | Thursday 19 March 2026 03:08:06 +0000 (0:00:04.213) 0:00:49.947 ******** 2026-03-19 03:08:06.817637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:06.919220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919351 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:08:06.919357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:06.919363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919396 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:08:06.919401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:06.919406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:06.919422 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 03:08:06.919426 | orchestrator | 2026-03-19 03:08:06.919431 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-19 03:08:06.919439 | orchestrator | Thursday 19 March 2026 03:08:06 +0000 (0:00:00.905) 0:00:50.853 ******** 2026-03-19 03:08:07.460113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:07.460213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460245 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:08:07.460263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:07.460328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460381 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:08:07.460396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:07.460409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:07.460439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:12.062810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:12.062932 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:08:12.062945 | orchestrator | 2026-03-19 03:08:12.062955 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-19 03:08:12.062963 | orchestrator | Thursday 19 March 2026 03:08:07 +0000 (0:00:00.839) 0:00:51.693 ******** 2026-03-19 03:08:12.062972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:12.062982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 
03:08:12.062990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:12.063038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:12.063105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609113 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609259 | orchestrator | 2026-03-19 03:08:24.609273 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-19 03:08:24.609285 | orchestrator | Thursday 19 March 2026 03:08:12 +0000 (0:00:04.387) 0:00:56.080 ******** 2026-03-19 03:08:24.609297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 03:08:24.609309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 03:08:24.609320 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-19 03:08:24.609331 | orchestrator | 2026-03-19 03:08:24.609342 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-19 03:08:24.609353 | orchestrator | Thursday 19 March 2026 03:08:13 +0000 (0:00:01.764) 0:00:57.844 ******** 2026-03-19 03:08:24.609366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:24.609406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:24.609444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:24.609457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:24.609540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:27.068391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:27.068492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:27.068547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:27.068561 | orchestrator | 2026-03-19 03:08:27.068572 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-19 03:08:27.068581 | orchestrator | Thursday 19 March 2026 03:08:24 +0000 (0:00:10.803) 0:01:08.648 ******** 2026-03-19 03:08:27.068590 | orchestrator | changed: [testbed-node-0] 
2026-03-19 03:08:27.068598 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:08:27.068606 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:08:27.068614 | orchestrator | 2026-03-19 03:08:27.068622 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-19 03:08:27.068630 | orchestrator | Thursday 19 March 2026 03:08:26 +0000 (0:00:01.563) 0:01:10.211 ******** 2026-03-19 03:08:27.068693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:27.068705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-19 03:08:27.068748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:27.068758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:27.068774 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:08:27.068782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:27.068791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:27.068799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:27.068819 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:30.642116 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:08:30.642218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-19 03:08:30.642255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:08:30.642265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 03:08:30.642273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 03:08:30.642281 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:08:30.642289 | orchestrator | 2026-03-19 
03:08:30.642298 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-19 03:08:30.642306 | orchestrator | Thursday 19 March 2026 03:08:27 +0000 (0:00:00.873) 0:01:11.085 ******** 2026-03-19 03:08:30.642314 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:08:30.642321 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:08:30.642328 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:08:30.642335 | orchestrator | 2026-03-19 03:08:30.642342 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-19 03:08:30.642350 | orchestrator | Thursday 19 March 2026 03:08:27 +0000 (0:00:00.560) 0:01:11.645 ******** 2026-03-19 03:08:30.642386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:30.642402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:30.642410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-19 03:08:30.642418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:30.642426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:30.642438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:08:30.642452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:09:56.524496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:09:56.524669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-19 03:09:56.524681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:09:56.524688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-19 03:09:56.524711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-19 03:09:56.524740 | orchestrator | 2026-03-19 03:09:56.524748 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-19 03:09:56.524756 | orchestrator | Thursday 19 March 2026 03:08:30 +0000 (0:00:03.030) 0:01:14.676 ******** 2026-03-19 03:09:56.524763 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:09:56.524770 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:09:56.524776 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:09:56.524782 | orchestrator | 2026-03-19 03:09:56.524788 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-19 03:09:56.524794 | orchestrator | Thursday 19 March 2026 03:08:31 +0000 (0:00:00.356) 0:01:15.032 ******** 2026-03-19 03:09:56.524800 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524807 | orchestrator | 2026-03-19 03:09:56.524825 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-19 03:09:56.524831 | orchestrator | Thursday 19 March 2026 03:08:33 +0000 (0:00:02.252) 0:01:17.284 ******** 2026-03-19 03:09:56.524837 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524843 | orchestrator | 2026-03-19 03:09:56.524849 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-19 03:09:56.524854 | orchestrator | Thursday 19 March 2026 03:08:35 +0000 (0:00:02.453) 0:01:19.738 ******** 2026-03-19 03:09:56.524860 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524866 | orchestrator | 2026-03-19 03:09:56.524872 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-19 03:09:56.524878 | orchestrator | Thursday 19 March 2026 03:08:56 +0000 (0:00:20.476) 0:01:40.215 ******** 2026-03-19 03:09:56.524883 | orchestrator | 2026-03-19 03:09:56.524889 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-19 03:09:56.524895 | orchestrator | Thursday 19 March 2026 03:08:56 +0000 (0:00:00.085) 0:01:40.300 ******** 2026-03-19 03:09:56.524900 | orchestrator | 2026-03-19 03:09:56.524906 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-19 03:09:56.524913 | orchestrator | Thursday 19 March 2026 03:08:56 +0000 (0:00:00.074) 0:01:40.375 ******** 2026-03-19 03:09:56.524919 | orchestrator | 2026-03-19 03:09:56.524925 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-19 03:09:56.524930 | orchestrator | Thursday 19 March 2026 03:08:56 +0000 (0:00:00.073) 0:01:40.449 ******** 2026-03-19 03:09:56.524934 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524938 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:09:56.524941 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:09:56.524945 | orchestrator | 2026-03-19 03:09:56.524949 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-19 03:09:56.524953 | orchestrator | Thursday 19 March 2026 03:09:19 +0000 (0:00:23.061) 0:02:03.510 ******** 2026-03-19 03:09:56.524956 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524960 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:09:56.524964 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:09:56.524968 | orchestrator | 2026-03-19 03:09:56.524971 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-19 03:09:56.524975 | orchestrator | Thursday 19 March 2026 03:09:24 +0000 (0:00:05.077) 0:02:08.587 ******** 2026-03-19 03:09:56.524979 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.524983 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:09:56.524987 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:09:56.524990 | orchestrator | 2026-03-19 
03:09:56.524994 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-19 03:09:56.524998 | orchestrator | Thursday 19 March 2026 03:09:45 +0000 (0:00:20.608) 0:02:29.196 ******** 2026-03-19 03:09:56.525002 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:09:56.525010 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:09:56.525014 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:09:56.525018 | orchestrator | 2026-03-19 03:09:56.525021 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-19 03:09:56.525026 | orchestrator | Thursday 19 March 2026 03:09:56 +0000 (0:00:10.979) 0:02:40.176 ******** 2026-03-19 03:09:56.525030 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:09:56.525033 | orchestrator | 2026-03-19 03:09:56.525037 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:09:56.525045 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-19 03:09:56.525053 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 03:09:56.525059 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 03:09:56.525064 | orchestrator | 2026-03-19 03:09:56.525070 | orchestrator | 2026-03-19 03:09:56.525077 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:09:56.525083 | orchestrator | Thursday 19 March 2026 03:09:56 +0000 (0:00:00.265) 0:02:40.441 ******** 2026-03-19 03:09:56.525089 | orchestrator | =============================================================================== 2026-03-19 03:09:56.525096 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.06s 2026-03-19 03:09:56.525103 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 20.61s 2026-03-19 03:09:56.525109 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.48s 2026-03-19 03:09:56.525122 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.98s 2026-03-19 03:09:56.525129 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.80s 2026-03-19 03:09:56.525133 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.88s 2026-03-19 03:09:56.525138 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.88s 2026-03-19 03:09:56.525142 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.08s 2026-03-19 03:09:56.525146 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.39s 2026-03-19 03:09:56.525151 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.21s 2026-03-19 03:09:56.525155 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.13s 2026-03-19 03:09:56.525160 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.69s 2026-03-19 03:09:56.525164 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.55s 2026-03-19 03:09:56.525169 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.42s 2026-03-19 03:09:56.525178 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.29s 2026-03-19 03:09:56.873940 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.03s 2026-03-19 03:09:56.874100 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.74s 2026-03-19 03:09:56.874117 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.45s 2026-03-19 03:09:56.874126 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.25s 2026-03-19 03:09:56.874135 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.12s 2026-03-19 03:09:59.233600 | orchestrator | 2026-03-19 03:09:59 | INFO  | Task cdbddba4-ce15-4afb-8f2d-1353831ba7f8 (barbican) was prepared for execution. 2026-03-19 03:09:59.233674 | orchestrator | 2026-03-19 03:09:59 | INFO  | It takes a moment until task cdbddba4-ce15-4afb-8f2d-1353831ba7f8 (barbican) has been started and output is visible here. 2026-03-19 03:10:45.697601 | orchestrator | 2026-03-19 03:10:45.697747 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:10:45.697776 | orchestrator | 2026-03-19 03:10:45.697797 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:10:45.697819 | orchestrator | Thursday 19 March 2026 03:10:03 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-03-19 03:10:45.697838 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:10:45.697860 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:10:45.697881 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:10:45.697901 | orchestrator | 2026-03-19 03:10:45.697922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:10:45.697941 | orchestrator | Thursday 19 March 2026 03:10:03 +0000 (0:00:00.324) 0:00:00.585 ******** 2026-03-19 03:10:45.697961 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-19 03:10:45.697981 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-19 03:10:45.698001 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-19 03:10:45.698098 | orchestrator | 2026-03-19 03:10:45.698121 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-19 03:10:45.698143 | orchestrator | 2026-03-19 03:10:45.698163 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-19 03:10:45.698182 | orchestrator | Thursday 19 March 2026 03:10:04 +0000 (0:00:00.477) 0:00:01.062 ******** 2026-03-19 03:10:45.698203 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:10:45.698224 | orchestrator | 2026-03-19 03:10:45.698243 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-19 03:10:45.698265 | orchestrator | Thursday 19 March 2026 03:10:04 +0000 (0:00:00.543) 0:00:01.606 ******** 2026-03-19 03:10:45.698286 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-19 03:10:45.698305 | orchestrator | 2026-03-19 03:10:45.698325 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-19 03:10:45.698345 | orchestrator | Thursday 19 March 2026 03:10:08 +0000 (0:00:03.714) 0:00:05.321 ******** 2026-03-19 03:10:45.698365 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-19 03:10:45.698386 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-19 03:10:45.698406 | orchestrator | 2026-03-19 03:10:45.698426 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-19 03:10:45.698446 | orchestrator | Thursday 19 March 2026 03:10:15 +0000 (0:00:06.851) 0:00:12.172 ******** 2026-03-19 03:10:45.698466 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:10:45.698486 | orchestrator | 2026-03-19 03:10:45.698506 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-19 
03:10:45.698526 | orchestrator | Thursday 19 March 2026 03:10:18 +0000 (0:00:03.472) 0:00:15.644 ******** 2026-03-19 03:10:45.698625 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:10:45.698647 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-19 03:10:45.698666 | orchestrator | 2026-03-19 03:10:45.698684 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-19 03:10:45.698703 | orchestrator | Thursday 19 March 2026 03:10:23 +0000 (0:00:04.427) 0:00:20.072 ******** 2026-03-19 03:10:45.698721 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:10:45.698765 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-19 03:10:45.698783 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-19 03:10:45.698802 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-19 03:10:45.698820 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-19 03:10:45.698838 | orchestrator | 2026-03-19 03:10:45.698856 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-19 03:10:45.698910 | orchestrator | Thursday 19 March 2026 03:10:39 +0000 (0:00:16.653) 0:00:36.726 ******** 2026-03-19 03:10:45.698929 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-19 03:10:45.698946 | orchestrator | 2026-03-19 03:10:45.698965 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-19 03:10:45.698983 | orchestrator | Thursday 19 March 2026 03:10:43 +0000 (0:00:04.016) 0:00:40.743 ******** 2026-03-19 03:10:45.699007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:45.699057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:45.699076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:45.699104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:45.699138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:45.699157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:45.699188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630436 | orchestrator | 2026-03-19 03:10:51.630446 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-19 03:10:51.630457 | orchestrator | Thursday 19 March 2026 03:10:45 +0000 (0:00:01.711) 0:00:42.454 ******** 2026-03-19 03:10:51.630467 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-19 03:10:51.630476 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-19 03:10:51.630484 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-19 03:10:51.630493 | orchestrator | 2026-03-19 03:10:51.630501 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-19 03:10:51.630509 | orchestrator | Thursday 19 March 2026 03:10:46 +0000 (0:00:01.189) 0:00:43.644 ******** 2026-03-19 03:10:51.630605 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:10:51.630618 | orchestrator | 2026-03-19 03:10:51.630627 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-19 03:10:51.630635 | orchestrator | Thursday 19 March 2026 03:10:47 +0000 (0:00:00.350) 0:00:43.995 ******** 2026-03-19 03:10:51.630643 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 03:10:51.630652 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:10:51.630660 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:10:51.630668 | orchestrator | 2026-03-19 03:10:51.630690 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-19 03:10:51.630710 | orchestrator | Thursday 19 March 2026 03:10:47 +0000 (0:00:00.306) 0:00:44.301 ******** 2026-03-19 03:10:51.630715 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:10:51.630721 | orchestrator | 2026-03-19 03:10:51.630736 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-19 03:10:51.630744 | orchestrator | Thursday 19 March 2026 03:10:48 +0000 (0:00:00.639) 0:00:44.941 ******** 2026-03-19 03:10:51.630754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:51.630781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:51.630790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-19 03:10:51.630808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630840 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:51.630856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:52.965335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:10:52.965418 | orchestrator | 2026-03-19 03:10:52.965426 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-19 03:10:52.965453 | orchestrator | Thursday 19 March 2026 03:10:51 +0000 (0:00:03.447) 0:00:48.388 ******** 2026-03-19 03:10:52.965459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:52.965477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965486 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:10:52.965492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:52.965509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965527 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:10:52.965537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:52.965584 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:10:52.965592 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:10:52.965596 | orchestrator | 2026-03-19 03:10:52.965600 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-19 03:10:52.965603 | orchestrator | Thursday 19 March 2026 03:10:52 +0000 (0:00:00.580) 0:00:48.969 ******** 2026-03-19 03:10:52.965613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:56.539677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:56.539758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 
03:10:56.539766 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:10:56.539790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:56.539795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:56.539800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:10:56.539805 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:10:56.539821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-19 03:10:56.539845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:10:56.539853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:10:56.539857 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:10:56.539862 | orchestrator | 2026-03-19 03:10:56.539867 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-19 03:10:56.539872 | orchestrator | Thursday 19 March 2026 03:10:52 +0000 (0:00:00.761) 0:00:49.731 ******** 2026-03-19 03:10:56.539877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:10:56.539882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:10:56.539894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.141083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.141310 | orchestrator |
2026-03-19 03:11:06.141323 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-19 03:11:06.141334 | orchestrator | Thursday 19 March 2026 03:10:56 +0000 (0:00:03.566) 0:00:53.298 ********
2026-03-19 03:11:06.141344 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:11:06.141355 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:11:06.141365 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:11:06.141375 | orchestrator |
2026-03-19 03:11:06.141401 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-19 03:11:06.141411 | orchestrator | Thursday 19 March 2026 03:10:58 +0000 (0:00:01.562) 0:00:54.860 ********
2026-03-19 03:11:06.141421 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 03:11:06.141431 | orchestrator |
2026-03-19 03:11:06.141441 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-19 03:11:06.141450 | orchestrator | Thursday 19 March 2026 03:10:58 +0000 (0:00:00.897) 0:00:55.758 ********
2026-03-19 03:11:06.141460 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:11:06.141470 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:11:06.141479 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:11:06.141489 | orchestrator |
2026-03-19 03:11:06.141499 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-19 03:11:06.141508 | orchestrator | Thursday 19 March 2026 03:10:59 +0000 (0:00:00.580) 0:00:56.338 ********
2026-03-19 03:11:06.141589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.141604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.141625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.141645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968741 | orchestrator |
2026-03-19 03:11:06.968752 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-19 03:11:06.968764 | orchestrator | Thursday 19 March 2026 03:11:06 +0000 (0:00:06.563) 0:01:02.902 ********
2026-03-19 03:11:06.968792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.968809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968830 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:11:06.968842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:06.968862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:06.968882 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:11:06.968901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:09.336981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337144 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:11:09.337159 | orchestrator |
2026-03-19 03:11:09.337171 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-03-19 03:11:09.337183 | orchestrator | Thursday 19 March 2026 03:11:06 +0000 (0:00:00.828) 0:01:03.731 ********
2026-03-19 03:11:09.337195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:09.337208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:09.337246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-19 03:11:09.337259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:11:09.337337 | orchestrator |
2026-03-19 03:11:09.337349 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-19 03:11:09.337367 | orchestrator | Thursday 19 March 2026 03:11:09 +0000 (0:00:02.356) 0:01:06.088 ********
2026-03-19 03:11:53.239324 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:11:53.239441 | orchestrator | skipping: [testbed-node-1]
2026-03-19
03:11:53.239448 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:11:53.239473 | orchestrator | 2026-03-19 03:11:53.239478 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-19 03:11:53.239484 | orchestrator | Thursday 19 March 2026 03:11:09 +0000 (0:00:00.310) 0:01:06.398 ******** 2026-03-19 03:11:53.239488 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239492 | orchestrator | 2026-03-19 03:11:53.239496 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-19 03:11:53.239500 | orchestrator | Thursday 19 March 2026 03:11:11 +0000 (0:00:02.281) 0:01:08.680 ******** 2026-03-19 03:11:53.239564 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239569 | orchestrator | 2026-03-19 03:11:53.239573 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-19 03:11:53.239577 | orchestrator | Thursday 19 March 2026 03:11:14 +0000 (0:00:02.466) 0:01:11.146 ******** 2026-03-19 03:11:53.239581 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239592 | orchestrator | 2026-03-19 03:11:53.239597 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 03:11:53.239601 | orchestrator | Thursday 19 March 2026 03:11:26 +0000 (0:00:12.408) 0:01:23.555 ******** 2026-03-19 03:11:53.239605 | orchestrator | 2026-03-19 03:11:53.239609 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 03:11:53.239614 | orchestrator | Thursday 19 March 2026 03:11:26 +0000 (0:00:00.073) 0:01:23.629 ******** 2026-03-19 03:11:53.239618 | orchestrator | 2026-03-19 03:11:53.239622 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-19 03:11:53.239626 | orchestrator | Thursday 19 March 2026 03:11:26 +0000 (0:00:00.072) 0:01:23.701 ******** 2026-03-19 
03:11:53.239630 | orchestrator | 2026-03-19 03:11:53.239634 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-19 03:11:53.239638 | orchestrator | Thursday 19 March 2026 03:11:27 +0000 (0:00:00.074) 0:01:23.776 ******** 2026-03-19 03:11:53.239642 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:11:53.239646 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239650 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:11:53.239654 | orchestrator | 2026-03-19 03:11:53.239658 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-19 03:11:53.239663 | orchestrator | Thursday 19 March 2026 03:11:37 +0000 (0:00:10.810) 0:01:34.587 ******** 2026-03-19 03:11:53.239667 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239671 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:11:53.239676 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:11:53.239680 | orchestrator | 2026-03-19 03:11:53.239684 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-19 03:11:53.239688 | orchestrator | Thursday 19 March 2026 03:11:42 +0000 (0:00:04.858) 0:01:39.445 ******** 2026-03-19 03:11:53.239692 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:11:53.239696 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:11:53.239700 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:11:53.239704 | orchestrator | 2026-03-19 03:11:53.239708 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:11:53.239714 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 03:11:53.239719 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 03:11:53.239724 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 03:11:53.239728 | orchestrator | 2026-03-19 03:11:53.239732 | orchestrator | 2026-03-19 03:11:53.239736 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:11:53.239740 | orchestrator | Thursday 19 March 2026 03:11:52 +0000 (0:00:10.327) 0:01:49.772 ******** 2026-03-19 03:11:53.239744 | orchestrator | =============================================================================== 2026-03-19 03:11:53.239754 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.65s 2026-03-19 03:11:53.239758 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.41s 2026-03-19 03:11:53.239762 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.81s 2026-03-19 03:11:53.239766 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.33s 2026-03-19 03:11:53.239770 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.85s 2026-03-19 03:11:53.239776 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.56s 2026-03-19 03:11:53.239783 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.86s 2026-03-19 03:11:53.239789 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.43s 2026-03-19 03:11:53.239795 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.02s 2026-03-19 03:11:53.239801 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.71s 2026-03-19 03:11:53.239808 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.57s 2026-03-19 03:11:53.239815 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 
2026-03-19 03:11:53.239820 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.45s 2026-03-19 03:11:53.239826 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.47s 2026-03-19 03:11:53.239832 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.36s 2026-03-19 03:11:53.239854 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s 2026-03-19 03:11:53.239868 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.71s 2026-03-19 03:11:53.239876 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.56s 2026-03-19 03:11:53.239882 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.19s 2026-03-19 03:11:53.239890 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.90s 2026-03-19 03:11:55.236149 | orchestrator | 2026-03-19 03:11:55 | INFO  | Task 39723a4f-16c0-4073-9b75-d0cb980b427a (designate) was prepared for execution. 2026-03-19 03:11:55.236230 | orchestrator | 2026-03-19 03:11:55 | INFO  | It takes a moment until task 39723a4f-16c0-4073-9b75-d0cb980b427a (designate) has been started and output is visible here. 
2026-03-19 03:12:28.338920 | orchestrator |
2026-03-19 03:12:28.339099 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:12:28.339135 | orchestrator |
2026-03-19 03:12:28.339217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:12:28.339243 | orchestrator | Thursday 19 March 2026 03:11:59 +0000 (0:00:00.269) 0:00:00.269 ********
2026-03-19 03:12:28.339263 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:12:28.339283 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:12:28.339303 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:12:28.339321 | orchestrator |
2026-03-19 03:12:28.339337 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:12:28.339348 | orchestrator | Thursday 19 March 2026 03:11:59 +0000 (0:00:00.315) 0:00:00.584 ********
2026-03-19 03:12:28.339360 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-19 03:12:28.339371 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-19 03:12:28.339382 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-19 03:12:28.339393 | orchestrator |
2026-03-19 03:12:28.339404 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-19 03:12:28.339418 | orchestrator |
2026-03-19 03:12:28.339431 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-19 03:12:28.339443 | orchestrator | Thursday 19 March 2026 03:12:00 +0000 (0:00:00.434) 0:00:01.019 ********
2026-03-19 03:12:28.339561 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:12:28.339588 | orchestrator |
2026-03-19 03:12:28.339609 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-19 03:12:28.339624 | orchestrator | Thursday 19 March 2026 03:12:00 +0000 (0:00:00.598) 0:00:01.617 ********
2026-03-19 03:12:28.339637 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-19 03:12:28.339649 | orchestrator |
2026-03-19 03:12:28.339662 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-19 03:12:28.339674 | orchestrator | Thursday 19 March 2026 03:12:04 +0000 (0:00:03.688) 0:00:05.306 ********
2026-03-19 03:12:28.339687 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-19 03:12:28.339699 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-19 03:12:28.339713 | orchestrator |
2026-03-19 03:12:28.339725 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-19 03:12:28.339737 | orchestrator | Thursday 19 March 2026 03:12:11 +0000 (0:00:07.029) 0:00:12.335 ********
2026-03-19 03:12:28.339749 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 03:12:28.339761 | orchestrator |
2026-03-19 03:12:28.339773 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-19 03:12:28.339785 | orchestrator | Thursday 19 March 2026 03:12:14 +0000 (0:00:03.417) 0:00:15.753 ********
2026-03-19 03:12:28.339798 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 03:12:28.339810 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-19 03:12:28.339822 | orchestrator |
2026-03-19 03:12:28.339833 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-19 03:12:28.339844 | orchestrator | Thursday 19 March 2026 03:12:19 +0000 (0:00:04.185) 0:00:19.938 ********
2026-03-19 03:12:28.339854 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-03-19 03:12:28.339865 | orchestrator | 2026-03-19 03:12:28.339875 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-19 03:12:28.339885 | orchestrator | Thursday 19 March 2026 03:12:22 +0000 (0:00:03.321) 0:00:23.260 ******** 2026-03-19 03:12:28.339894 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-19 03:12:28.339904 | orchestrator | 2026-03-19 03:12:28.339913 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-19 03:12:28.339923 | orchestrator | Thursday 19 March 2026 03:12:26 +0000 (0:00:03.929) 0:00:27.189 ******** 2026-03-19 03:12:28.339953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:28.339995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:28.340017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:28.340029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:28.340040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:28.340050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:28.340066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:28.340090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 
03:12:34.209433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:34.209454 | orchestrator | 2026-03-19 03:12:34.209465 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-19 03:12:34.209476 | orchestrator | Thursday 19 March 2026 03:12:29 +0000 (0:00:02.857) 0:00:30.047 ******** 2026-03-19 03:12:34.209574 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:12:34.209586 | orchestrator | 2026-03-19 03:12:34.209596 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-19 03:12:34.209606 | orchestrator | Thursday 19 March 2026 03:12:29 +0000 (0:00:00.129) 0:00:30.176 ******** 2026-03-19 03:12:34.209615 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
03:12:34.209626 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:12:34.209635 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:12:34.209645 | orchestrator | 2026-03-19 03:12:34.209655 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-19 03:12:34.209665 | orchestrator | Thursday 19 March 2026 03:12:29 +0000 (0:00:00.398) 0:00:30.574 ******** 2026-03-19 03:12:34.209675 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:12:34.209697 | orchestrator | 2026-03-19 03:12:34.209706 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-19 03:12:34.209717 | orchestrator | Thursday 19 March 2026 03:12:30 +0000 (0:00:00.465) 0:00:31.039 ******** 2026-03-19 03:12:34.209734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:34.209756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:35.965388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:35.965542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:35.965793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:36.856099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:36.856192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:36.856201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:36.856235 | orchestrator | 2026-03-19 03:12:36.856243 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-19 03:12:36.856251 | orchestrator | Thursday 19 March 2026 03:12:35 +0000 (0:00:05.843) 0:00:36.883 ******** 2026-03-19 03:12:36.856275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:36.856293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:36.856321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:36.856328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:36.856335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:12:36.856347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-19 03:12:36.856354 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:12:36.856365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:36.856372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:36.856378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:36.856389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595664 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:12:37.595683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:37.595690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:37.595694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 
03:12:37.595735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595739 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:12:37.595743 | orchestrator | 2026-03-19 03:12:37.595748 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-19 03:12:37.595753 | orchestrator | Thursday 19 March 2026 03:12:36 +0000 (0:00:00.991) 0:00:37.875 ******** 2026-03-19 03:12:37.595760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:37.595764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:37.595768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.595775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957884 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:12:37.957912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:37.957922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:37.957931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.957997 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:12:37.958008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:12:37.958063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 03:12:37.958072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.958080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:12:37.958100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:12:42.412208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:12:42.412314 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:12:42.412327 | orchestrator | 2026-03-19 03:12:42.412336 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-19 
03:12:42.412345 | orchestrator | Thursday 19 March 2026 03:12:37 +0000 (0:00:00.999) 0:00:38.874 ******** 2026-03-19 03:12:42.412370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:42.412380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:42.412388 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:42.412431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:42.412553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854313 | orchestrator | 2026-03-19 03:12:53.854318 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-19 03:12:53.854324 | orchestrator | Thursday 19 March 2026 03:12:44 +0000 (0:00:06.310) 0:00:45.185 ******** 2026-03-19 03:12:53.854333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:53.854339 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:53.854348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-19 03:12:53.854354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:12:53.854363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:13:01.961871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:13:01.961992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:01.962240 | orchestrator | 2026-03-19 03:13:01.962247 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-19 03:13:01.962255 | orchestrator | Thursday 19 March 2026 03:12:58 +0000 (0:00:14.070) 0:00:59.255 ******** 2026-03-19 03:13:01.962266 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 03:13:06.173160 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 03:13:06.173276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-19 03:13:06.173291 | orchestrator | 2026-03-19 03:13:06.173300 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-19 03:13:06.173309 | orchestrator | Thursday 19 March 2026 03:13:01 +0000 (0:00:03.624) 0:01:02.879 ******** 2026-03-19 03:13:06.173317 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 03:13:06.173324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 03:13:06.173334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-19 03:13:06.173338 | orchestrator | 2026-03-19 03:13:06.173359 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-19 03:13:06.173364 | orchestrator | Thursday 19 March 2026 03:13:04 +0000 (0:00:02.424) 0:01:05.304 ******** 2026-03-19 03:13:06.173394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:13:06.173406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-19 03:13:06.173413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-19 03:13:06.173436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:13:06.173447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:13:06.173459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-19 03:13:06.173520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:13:06.173528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-19 03:13:06.173535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-19 03:13:06.173542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:13:06.173557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:13:09.093091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-19 03:13:09.093191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 03:13:09.093199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 03:13:09.093204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 03:13:09.093209 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:09.093214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:09.093229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:13:09.093238 | orchestrator | 2026-03-19 03:13:09.093243 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2026-03-19 03:13:09.093248 | orchestrator | Thursday 19 March 2026 03:13:07 +0000 (0:00:02.935) 0:01:08.240 ********
2026-03-19 03:13:09.093256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:09.093262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:09.093266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:09.093270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:09.093277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:10.065911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:10.065968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.065994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.066005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:10.066011 | orchestrator |
2026-03-19 03:13:10.066055 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-19 03:13:10.066067 | orchestrator | Thursday 19 March 2026 03:13:10 +0000 (0:00:02.738) 0:01:10.978 ********
2026-03-19 03:13:11.039396 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:13:11.039500 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:13:11.039506 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:13:11.039510 | orchestrator |
2026-03-19 03:13:11.039516 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-19 03:13:11.039535 | orchestrator | Thursday 19 March 2026 03:13:10 +0000 (0:00:00.306) 0:01:11.284 ********
2026-03-19 03:13:11.039542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:11.039550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:11.039556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039608 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:13:11.039612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:11.039616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:11.039620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:11.039638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:14.590974 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:13:14.591093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:14.591108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:14.591117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:14.591127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:14.591160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:14.591169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:14.591176 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:13:14.591184 | orchestrator |
2026-03-19 03:13:14.591205 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-19 03:13:14.591219 | orchestrator | Thursday 19 March 2026 03:13:11 +0000 (0:00:00.784) 0:01:12.069 ********
2026-03-19 03:13:14.591227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:14.591236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:14.591244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-19 03:13:14.591257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:14.591270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:16.416266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-19 03:13:16.416363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-19 03:13:16.416595 | orchestrator |
2026-03-19 03:13:16.416604 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-19 03:13:16.416613 | orchestrator | Thursday 19 March 2026 03:13:16 +0000 (0:00:04.975) 0:01:17.044 ********
2026-03-19 03:13:16.416621 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:13:16.416635 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:14:25.817171 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:14:25.817264 | orchestrator |
2026-03-19 03:14:25.817288 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-19 03:14:25.817296 | orchestrator | Thursday 19 March 2026 03:13:16 +0000 (0:00:00.289) 0:01:17.334 ********
2026-03-19 03:14:25.817302 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-19 03:14:25.817308 | orchestrator |
2026-03-19 03:14:25.817313 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-19 03:14:25.817318 | orchestrator | Thursday 19 March 2026 03:13:18 +0000 (0:00:02.586) 0:01:19.921 ********
2026-03-19 03:14:25.817324 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-19 03:14:25.817329 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-19 03:14:25.817335 | orchestrator |
2026-03-19 03:14:25.817340 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-19 03:14:25.817345 | orchestrator | Thursday 19 March 2026 03:13:21 +0000 (0:00:02.581) 0:01:22.502 ********
2026-03-19 03:14:25.817350 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:14:25.817355 | orchestrator |
2026-03-19 03:14:25.817360 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-19 03:14:25.817365 | orchestrator | Thursday 19 March 2026 03:13:37 +0000 (0:00:16.274) 0:01:38.776 ********
2026-03-19 03:14:25.817371 | orchestrator |
2026-03-19 03:14:25.817380 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-19 03:14:25.817440 | orchestrator | Thursday 19 March 2026 03:13:37 +0000 (0:00:00.070) 0:01:38.847 ********
2026-03-19 03:14:25.817451 | orchestrator |
2026-03-19 03:14:25.817459 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-19 03:14:25.817467 | orchestrator | Thursday 19 March 2026 03:13:37 +0000 (0:00:00.070) 0:01:38.918 ********
2026-03-19 03:14:25.817474 | orchestrator |
2026-03-19
03:14:25.817482 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-19 03:14:25.817491 | orchestrator | Thursday 19 March 2026 03:13:38 +0000 (0:00:00.074) 0:01:38.993 ******** 2026-03-19 03:14:25.817499 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817507 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817515 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817522 | orchestrator | 2026-03-19 03:14:25.817530 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-19 03:14:25.817539 | orchestrator | Thursday 19 March 2026 03:13:45 +0000 (0:00:07.542) 0:01:46.535 ******** 2026-03-19 03:14:25.817547 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817555 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817563 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817571 | orchestrator | 2026-03-19 03:14:25.817578 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-19 03:14:25.817587 | orchestrator | Thursday 19 March 2026 03:13:51 +0000 (0:00:05.578) 0:01:52.113 ******** 2026-03-19 03:14:25.817593 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817601 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817609 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817617 | orchestrator | 2026-03-19 03:14:25.817625 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-19 03:14:25.817633 | orchestrator | Thursday 19 March 2026 03:13:59 +0000 (0:00:08.601) 0:02:00.715 ******** 2026-03-19 03:14:25.817641 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817649 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817657 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817664 | orchestrator | 2026-03-19 03:14:25.817673 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-19 03:14:25.817681 | orchestrator | Thursday 19 March 2026 03:14:05 +0000 (0:00:05.826) 0:02:06.542 ******** 2026-03-19 03:14:25.817688 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817698 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817703 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817709 | orchestrator | 2026-03-19 03:14:25.817717 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-19 03:14:25.817726 | orchestrator | Thursday 19 March 2026 03:14:11 +0000 (0:00:05.686) 0:02:12.229 ******** 2026-03-19 03:14:25.817734 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817742 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:14:25.817750 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:14:25.817758 | orchestrator | 2026-03-19 03:14:25.817767 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-19 03:14:25.817776 | orchestrator | Thursday 19 March 2026 03:14:17 +0000 (0:00:05.817) 0:02:18.047 ******** 2026-03-19 03:14:25.817785 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:14:25.817794 | orchestrator | 2026-03-19 03:14:25.817803 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:14:25.817812 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 03:14:25.817823 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 03:14:25.817829 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 03:14:25.817835 | orchestrator | 2026-03-19 03:14:25.817848 | orchestrator | 2026-03-19 03:14:25.817854 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-19 03:14:25.817861 | orchestrator | Thursday 19 March 2026 03:14:25 +0000 (0:00:08.289) 0:02:26.336 ******** 2026-03-19 03:14:25.817866 | orchestrator | =============================================================================== 2026-03-19 03:14:25.817872 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.27s 2026-03-19 03:14:25.817878 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.07s 2026-03-19 03:14:25.817899 | orchestrator | designate : Restart designate-central container ------------------------- 8.60s 2026-03-19 03:14:25.817911 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.29s 2026-03-19 03:14:25.817917 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.54s 2026-03-19 03:14:25.817923 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.03s 2026-03-19 03:14:25.817929 | orchestrator | designate : Copying over config.json files for services ----------------- 6.31s 2026-03-19 03:14:25.817935 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.84s 2026-03-19 03:14:25.817941 | orchestrator | designate : Restart designate-producer container ------------------------ 5.83s 2026-03-19 03:14:25.817946 | orchestrator | designate : Restart designate-worker container -------------------------- 5.82s 2026-03-19 03:14:25.817953 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.69s 2026-03-19 03:14:25.817963 | orchestrator | designate : Restart designate-api container ----------------------------- 5.58s 2026-03-19 03:14:25.817976 | orchestrator | designate : Check designate containers ---------------------------------- 4.98s 2026-03-19 03:14:25.817984 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.19s 2026-03-19 03:14:25.817992 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.93s 2026-03-19 03:14:25.818000 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.69s 2026-03-19 03:14:25.818008 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.62s 2026-03-19 03:14:25.818078 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.42s 2026-03-19 03:14:25.818090 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.32s 2026-03-19 03:14:25.818099 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.94s 2026-03-19 03:14:28.071916 | orchestrator | 2026-03-19 03:14:28 | INFO  | Task ea78fc23-bc40-4bc2-bcff-bd1bcf9ed61b (octavia) was prepared for execution. 2026-03-19 03:14:28.072011 | orchestrator | 2026-03-19 03:14:28 | INFO  | It takes a moment until task ea78fc23-bc40-4bc2-bcff-bd1bcf9ed61b (octavia) has been started and output is visible here. 
2026-03-19 03:16:43.493087 | orchestrator | 2026-03-19 03:16:43.493190 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:16:43.493201 | orchestrator | 2026-03-19 03:16:43.493208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:16:43.493215 | orchestrator | Thursday 19 March 2026 03:14:32 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-03-19 03:16:43.493222 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:16:43.493229 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:16:43.493235 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:16:43.493241 | orchestrator | 2026-03-19 03:16:43.493248 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:16:43.493254 | orchestrator | Thursday 19 March 2026 03:14:32 +0000 (0:00:00.320) 0:00:00.579 ******** 2026-03-19 03:16:43.493260 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-19 03:16:43.493267 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-19 03:16:43.493274 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-19 03:16:43.493280 | orchestrator | 2026-03-19 03:16:43.493287 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-19 03:16:43.493314 | orchestrator | 2026-03-19 03:16:43.493321 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 03:16:43.493327 | orchestrator | Thursday 19 March 2026 03:14:32 +0000 (0:00:00.425) 0:00:01.004 ******** 2026-03-19 03:16:43.493334 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:16:43.493341 | orchestrator | 2026-03-19 03:16:43.493348 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-19 03:16:43.493444 | orchestrator | Thursday 19 March 2026 03:14:33 +0000 (0:00:00.577) 0:00:01.581 ******** 2026-03-19 03:16:43.493461 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-19 03:16:43.493470 | orchestrator | 2026-03-19 03:16:43.493480 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-19 03:16:43.493489 | orchestrator | Thursday 19 March 2026 03:14:37 +0000 (0:00:04.025) 0:00:05.607 ******** 2026-03-19 03:16:43.493499 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-19 03:16:43.493510 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-19 03:16:43.493522 | orchestrator | 2026-03-19 03:16:43.493532 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-19 03:16:43.493543 | orchestrator | Thursday 19 March 2026 03:14:44 +0000 (0:00:06.966) 0:00:12.573 ******** 2026-03-19 03:16:43.493553 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:16:43.493565 | orchestrator | 2026-03-19 03:16:43.493576 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-19 03:16:43.493586 | orchestrator | Thursday 19 March 2026 03:14:48 +0000 (0:00:03.701) 0:00:16.275 ******** 2026-03-19 03:16:43.493596 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:16:43.493607 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-19 03:16:43.493619 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-19 03:16:43.493629 | orchestrator | 2026-03-19 03:16:43.493640 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-19 03:16:43.493652 | orchestrator | Thursday 19 March 2026 03:14:57 +0000 
(0:00:08.898) 0:00:25.174 ******** 2026-03-19 03:16:43.493665 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:16:43.493677 | orchestrator | 2026-03-19 03:16:43.493708 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-19 03:16:43.493716 | orchestrator | Thursday 19 March 2026 03:15:00 +0000 (0:00:03.598) 0:00:28.772 ******** 2026-03-19 03:16:43.493724 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-19 03:16:43.493731 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-19 03:16:43.493738 | orchestrator | 2026-03-19 03:16:43.493745 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-19 03:16:43.493752 | orchestrator | Thursday 19 March 2026 03:15:08 +0000 (0:00:07.774) 0:00:36.547 ******** 2026-03-19 03:16:43.493760 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-19 03:16:43.493767 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-19 03:16:43.493773 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-19 03:16:43.493779 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-19 03:16:43.493785 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-19 03:16:43.493792 | orchestrator | 2026-03-19 03:16:43.493800 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 03:16:43.493810 | orchestrator | Thursday 19 March 2026 03:15:25 +0000 (0:00:17.123) 0:00:53.670 ******** 2026-03-19 03:16:43.493822 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:16:43.493849 | orchestrator | 2026-03-19 03:16:43.493859 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-19 03:16:43.493869 | orchestrator | Thursday 19 March 2026 03:15:26 +0000 (0:00:00.746) 0:00:54.417 ******** 2026-03-19 03:16:43.493878 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.493889 | orchestrator | 2026-03-19 03:16:43.493897 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-19 03:16:43.493907 | orchestrator | Thursday 19 March 2026 03:15:31 +0000 (0:00:04.848) 0:00:59.265 ******** 2026-03-19 03:16:43.493918 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.493927 | orchestrator | 2026-03-19 03:16:43.493937 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-19 03:16:43.493966 | orchestrator | Thursday 19 March 2026 03:15:35 +0000 (0:00:04.747) 0:01:04.012 ******** 2026-03-19 03:16:43.493978 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:16:43.493988 | orchestrator | 2026-03-19 03:16:43.493998 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-19 03:16:43.494007 | orchestrator | Thursday 19 March 2026 03:15:39 +0000 (0:00:03.398) 0:01:07.410 ******** 2026-03-19 03:16:43.494085 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-19 03:16:43.494100 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-19 03:16:43.494112 | orchestrator | 2026-03-19 03:16:43.494123 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-19 03:16:43.494133 | orchestrator | Thursday 19 March 2026 03:15:50 +0000 (0:00:10.663) 0:01:18.073 ******** 2026-03-19 03:16:43.494143 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-19 03:16:43.494154 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-19 03:16:43.494166 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-19 03:16:43.494178 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-19 03:16:43.494188 | orchestrator | 2026-03-19 03:16:43.494199 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-19 03:16:43.494214 | orchestrator | Thursday 19 March 2026 03:16:07 +0000 (0:00:17.978) 0:01:36.051 ******** 2026-03-19 03:16:43.494225 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494236 | orchestrator | 2026-03-19 03:16:43.494246 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-19 03:16:43.494256 | orchestrator | Thursday 19 March 2026 03:16:12 +0000 (0:00:04.912) 0:01:40.964 ******** 2026-03-19 03:16:43.494266 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494277 | orchestrator | 2026-03-19 03:16:43.494288 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-19 03:16:43.494298 | orchestrator | Thursday 19 March 2026 03:16:18 +0000 (0:00:05.749) 0:01:46.714 ******** 2026-03-19 03:16:43.494308 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:16:43.494318 | orchestrator | 2026-03-19 03:16:43.494329 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-19 03:16:43.494339 | orchestrator | Thursday 19 March 2026 03:16:18 +0000 (0:00:00.231) 0:01:46.945 ******** 2026-03-19 03:16:43.494350 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:16:43.494383 | orchestrator | 2026-03-19 03:16:43.494394 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-19 03:16:43.494405 | orchestrator | Thursday 19 March 2026 03:16:24 +0000 (0:00:05.303) 0:01:52.249 ******** 2026-03-19 03:16:43.494415 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:16:43.494426 | orchestrator | 2026-03-19 03:16:43.494436 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-19 03:16:43.494456 | orchestrator | Thursday 19 March 2026 03:16:25 +0000 (0:00:01.121) 0:01:53.370 ******** 2026-03-19 03:16:43.494466 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494476 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494486 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494496 | orchestrator | 2026-03-19 03:16:43.494514 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-19 03:16:43.494524 | orchestrator | Thursday 19 March 2026 03:16:31 +0000 (0:00:05.707) 0:01:59.077 ******** 2026-03-19 03:16:43.494534 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494545 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494555 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494565 | orchestrator | 2026-03-19 03:16:43.494575 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-19 03:16:43.494586 | orchestrator | Thursday 19 March 2026 03:16:35 +0000 (0:00:04.707) 0:02:03.785 ******** 2026-03-19 03:16:43.494596 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494606 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494617 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494627 | orchestrator | 2026-03-19 03:16:43.494637 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-19 
03:16:43.494647 | orchestrator | Thursday 19 March 2026 03:16:36 +0000 (0:00:01.042) 0:02:04.828 ******** 2026-03-19 03:16:43.494657 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:16:43.494667 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:16:43.494677 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:16:43.494688 | orchestrator | 2026-03-19 03:16:43.494698 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-19 03:16:43.494709 | orchestrator | Thursday 19 March 2026 03:16:38 +0000 (0:00:01.837) 0:02:06.665 ******** 2026-03-19 03:16:43.494719 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494730 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494740 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494750 | orchestrator | 2026-03-19 03:16:43.494760 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-19 03:16:43.494770 | orchestrator | Thursday 19 March 2026 03:16:39 +0000 (0:00:01.260) 0:02:07.925 ******** 2026-03-19 03:16:43.494780 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494790 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494801 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494812 | orchestrator | 2026-03-19 03:16:43.494822 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-19 03:16:43.494832 | orchestrator | Thursday 19 March 2026 03:16:41 +0000 (0:00:01.199) 0:02:09.125 ******** 2026-03-19 03:16:43.494842 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:16:43.494852 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:16:43.494863 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:16:43.494873 | orchestrator | 2026-03-19 03:16:43.494893 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-19 03:17:11.244244 | orchestrator 
| Thursday 19 March 2026 03:16:43 +0000 (0:00:02.412) 0:02:11.538 ******** 2026-03-19 03:17:11.244427 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:17:11.244449 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:17:11.244460 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:17:11.244469 | orchestrator | 2026-03-19 03:17:11.244478 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-19 03:17:11.244489 | orchestrator | Thursday 19 March 2026 03:16:45 +0000 (0:00:01.655) 0:02:13.193 ******** 2026-03-19 03:17:11.244499 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244509 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:17:11.244519 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:17:11.244529 | orchestrator | 2026-03-19 03:17:11.244539 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-19 03:17:11.244549 | orchestrator | Thursday 19 March 2026 03:16:45 +0000 (0:00:00.673) 0:02:13.867 ******** 2026-03-19 03:17:11.244588 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:17:11.244608 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:17:11.244618 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244628 | orchestrator | 2026-03-19 03:17:11.244637 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 03:17:11.244646 | orchestrator | Thursday 19 March 2026 03:16:48 +0000 (0:00:03.076) 0:02:16.943 ******** 2026-03-19 03:17:11.244657 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:17:11.244666 | orchestrator | 2026-03-19 03:17:11.244674 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-19 03:17:11.244683 | orchestrator | Thursday 19 March 2026 03:16:49 +0000 (0:00:00.550) 0:02:17.493 ******** 2026-03-19 
03:17:11.244691 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244700 | orchestrator | 2026-03-19 03:17:11.244709 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-19 03:17:11.244719 | orchestrator | Thursday 19 March 2026 03:16:53 +0000 (0:00:04.253) 0:02:21.746 ******** 2026-03-19 03:17:11.244728 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244737 | orchestrator | 2026-03-19 03:17:11.244746 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-19 03:17:11.244756 | orchestrator | Thursday 19 March 2026 03:16:57 +0000 (0:00:03.431) 0:02:25.178 ******** 2026-03-19 03:17:11.244764 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-19 03:17:11.244775 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-19 03:17:11.244784 | orchestrator | 2026-03-19 03:17:11.244795 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-19 03:17:11.244806 | orchestrator | Thursday 19 March 2026 03:17:04 +0000 (0:00:07.419) 0:02:32.598 ******** 2026-03-19 03:17:11.244815 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244824 | orchestrator | 2026-03-19 03:17:11.244833 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-19 03:17:11.244843 | orchestrator | Thursday 19 March 2026 03:17:08 +0000 (0:00:04.167) 0:02:36.766 ******** 2026-03-19 03:17:11.244853 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:17:11.244863 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:17:11.244873 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:17:11.244882 | orchestrator | 2026-03-19 03:17:11.244892 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-19 03:17:11.244902 | orchestrator | Thursday 19 March 2026 03:17:09 +0000 (0:00:00.557) 0:02:37.323 ******** 
2026-03-19 03:17:11.244934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:11.244969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:11.244991 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:11.245002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:11.245013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:11.245028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:11.245038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:11.245048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:11.245074 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690859 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:12.690996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:12.691018 | orchestrator | 2026-03-19 03:17:12.691039 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-19 03:17:12.691061 | orchestrator | Thursday 19 March 2026 03:17:11 +0000 (0:00:02.407) 0:02:39.731 ******** 2026-03-19 03:17:12.691081 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:17:12.691102 | orchestrator | 2026-03-19 03:17:12.691121 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-19 03:17:12.691140 | orchestrator | Thursday 19 March 2026 03:17:11 +0000 (0:00:00.132) 0:02:39.864 ******** 2026-03-19 03:17:12.691156 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:17:12.691195 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:17:12.691207 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:17:12.691218 | orchestrator | 2026-03-19 03:17:12.691229 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-19 03:17:12.691240 | orchestrator | Thursday 19 March 2026 03:17:12 +0000 (0:00:00.343) 0:02:40.208 ******** 2026-03-19 03:17:12.691252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:12.691266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:12.691287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:12.691299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:12.691320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:12.691332 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:17:12.691386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:17.589637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:17.589754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:17.589791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:17.589848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:17.589863 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:17:17.589878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:17.589894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:17.589927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:17.589941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:17.589959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:17.589981 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:17:17.589994 | orchestrator | 2026-03-19 03:17:17.590008 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 03:17:17.590086 | orchestrator | Thursday 19 March 2026 03:17:12 +0000 (0:00:00.633) 0:02:40.841 ******** 2026-03-19 03:17:17.590101 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:17:17.590114 | orchestrator | 2026-03-19 03:17:17.590128 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-19 03:17:17.590141 | orchestrator | Thursday 19 March 2026 03:17:13 +0000 (0:00:00.731) 0:02:41.573 ******** 2026-03-19 03:17:17.590155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:17.590170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:17.590193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:19.182159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:19.182272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:19.182279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:19.182284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:19.182370 | orchestrator | 2026-03-19 03:17:19.182376 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-19 03:17:19.182382 | orchestrator | Thursday 19 March 2026 03:17:18 +0000 (0:00:05.096) 0:02:46.669 ******** 2026-03-19 03:17:19.182391 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:19.293279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:19.293499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:19.293530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:19.293551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:19.293569 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:17:19.293591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:19.293637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:19.293716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:19.293745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:19.293762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:19.293773 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:17:19.293786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:19.293797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:19.293808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:19.293836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-19 03:17:20.035585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:20.035706 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:17:20.035719 | orchestrator | 2026-03-19 03:17:20.035728 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-19 03:17:20.035736 | orchestrator | Thursday 19 March 2026 03:17:19 +0000 (0:00:00.680) 0:02:47.350 ******** 2026-03-19 03:17:20.035746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-19 03:17:20.035755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:20.035763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:20.035794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:20.035817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:20.035825 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:17:20.035848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:20.035857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:20.035864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:20.035871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:20.035885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:20.035892 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:17:20.035910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 03:17:24.790471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 03:17:24.790589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 03:17:24.790601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 03:17:24.790609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 03:17:24.790640 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:17:24.790649 | orchestrator | 2026-03-19 03:17:24.790656 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-19 
03:17:24.790664 | orchestrator | Thursday 19 March 2026 03:17:20 +0000 (0:00:01.251) 0:02:48.601 ******** 2026-03-19 03:17:24.790671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:24.790713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:24.790732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:24.790742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:24.790752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:24.790773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:24.790784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:24.790805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-19 03:17:40.752537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:40.752545 | orchestrator | 2026-03-19 03:17:40.752553 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-19 03:17:40.752562 | orchestrator | Thursday 19 March 2026 03:17:25 +0000 (0:00:05.353) 0:02:53.955 ******** 2026-03-19 03:17:40.752569 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 03:17:40.752577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 03:17:40.752584 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-19 03:17:40.752590 | orchestrator | 2026-03-19 03:17:40.752597 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-19 03:17:40.752603 | orchestrator | Thursday 19 March 2026 03:17:27 +0000 (0:00:01.573) 0:02:55.528 ******** 2026-03-19 03:17:40.752610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:40.752623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:40.752630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:17:40.752646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:56.050955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:56.051063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:17:56.051099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:17:56.051208 | orchestrator | 2026-03-19 03:17:56.051216 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-19 03:17:56.051224 | orchestrator | Thursday 19 March 2026 03:17:43 +0000 (0:00:16.465) 0:03:11.993 ******** 2026-03-19 03:17:56.051231 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:17:56.051239 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:17:56.051246 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:17:56.051252 | orchestrator | 2026-03-19 03:17:56.051258 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-19 03:17:56.051265 | orchestrator | Thursday 19 March 2026 03:17:45 +0000 (0:00:01.763) 0:03:13.757 ******** 2026-03-19 03:17:56.051272 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-19 03:17:56.051278 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-19 03:17:56.051285 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-19 03:17:56.051291 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-19 03:17:56.051297 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-19 03:17:56.051304 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-19 03:17:56.051310 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-19 03:17:56.051317 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-19 03:17:56.051389 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-19 03:17:56.051396 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-19 03:17:56.051403 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-19 03:17:56.051409 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-19 03:17:56.051415 | orchestrator | 2026-03-19 03:17:56.051426 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-19 03:17:56.051433 | orchestrator | Thursday 19 March 2026 03:17:50 +0000 (0:00:05.137) 0:03:18.895 ******** 2026-03-19 03:17:56.051439 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-19 03:17:56.051445 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-19 03:17:56.051464 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-19 03:18:04.587805 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.587916 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.587929 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.587939 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.587948 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.587957 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.587966 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-19 03:18:04.587975 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-19 03:18:04.587984 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-19 03:18:04.587997 | orchestrator | 2026-03-19 03:18:04.588014 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-19 03:18:04.588037 | orchestrator | Thursday 19 March 2026 03:17:56 +0000 (0:00:05.204) 0:03:24.100 ******** 2026-03-19 03:18:04.588055 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-19 03:18:04.588070 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-19 03:18:04.588084 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-19 03:18:04.588099 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.588113 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.588129 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-19 03:18:04.588144 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.588159 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.588174 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-19 03:18:04.588190 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-19 03:18:04.588204 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-19 03:18:04.588220 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-19 03:18:04.588235 | orchestrator | 2026-03-19 03:18:04.588250 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-19 03:18:04.588266 | orchestrator | Thursday 19 March 2026 03:18:01 +0000 (0:00:05.226) 0:03:29.326 ******** 2026-03-19 03:18:04.588286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:18:04.588308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:18:04.588446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 03:18:04.588469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:18:04.588488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-19 03:18:04.588505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-19 03:18:04.588520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:18:04.588536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:18:04.588561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-19 03:18:04.588581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:12.522541 | orchestrator | 2026-03-19 
03:19:12.522550 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-19 03:19:12.522559 | orchestrator | Thursday 19 March 2026 03:18:05 +0000 (0:00:04.094) 0:03:33.420 ******** 2026-03-19 03:19:12.522566 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:12.522590 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:12.522597 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:12.522604 | orchestrator | 2026-03-19 03:19:12.522611 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-19 03:19:12.522617 | orchestrator | Thursday 19 March 2026 03:18:05 +0000 (0:00:00.530) 0:03:33.951 ******** 2026-03-19 03:19:12.522623 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522629 | orchestrator | 2026-03-19 03:19:12.522636 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-19 03:19:12.522643 | orchestrator | Thursday 19 March 2026 03:18:08 +0000 (0:00:02.183) 0:03:36.135 ******** 2026-03-19 03:19:12.522650 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522656 | orchestrator | 2026-03-19 03:19:12.522663 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-19 03:19:12.522670 | orchestrator | Thursday 19 March 2026 03:18:10 +0000 (0:00:02.295) 0:03:38.430 ******** 2026-03-19 03:19:12.522677 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522684 | orchestrator | 2026-03-19 03:19:12.522690 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-19 03:19:12.522698 | orchestrator | Thursday 19 March 2026 03:18:12 +0000 (0:00:02.380) 0:03:40.811 ******** 2026-03-19 03:19:12.522720 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522727 | orchestrator | 2026-03-19 03:19:12.522732 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-19 03:19:12.522738 | orchestrator | Thursday 19 March 2026 03:18:15 +0000 (0:00:02.467) 0:03:43.278 ******** 2026-03-19 03:19:12.522744 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522750 | orchestrator | 2026-03-19 03:19:12.522757 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-19 03:19:12.522763 | orchestrator | Thursday 19 March 2026 03:18:38 +0000 (0:00:23.281) 0:04:06.560 ******** 2026-03-19 03:19:12.522768 | orchestrator | 2026-03-19 03:19:12.522774 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-19 03:19:12.522780 | orchestrator | Thursday 19 March 2026 03:18:38 +0000 (0:00:00.066) 0:04:06.627 ******** 2026-03-19 03:19:12.522787 | orchestrator | 2026-03-19 03:19:12.522793 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-19 03:19:12.522799 | orchestrator | Thursday 19 March 2026 03:18:38 +0000 (0:00:00.065) 0:04:06.693 ******** 2026-03-19 03:19:12.522806 | orchestrator | 2026-03-19 03:19:12.522813 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-19 03:19:12.522819 | orchestrator | Thursday 19 March 2026 03:18:38 +0000 (0:00:00.066) 0:04:06.760 ******** 2026-03-19 03:19:12.522825 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522833 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:19:12.522840 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:19:12.522847 | orchestrator | 2026-03-19 03:19:12.522853 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-19 03:19:12.522868 | orchestrator | Thursday 19 March 2026 03:18:49 +0000 (0:00:11.074) 0:04:17.834 ******** 2026-03-19 03:19:12.522876 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522882 | orchestrator | changed: 
[testbed-node-1] 2026-03-19 03:19:12.522888 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:19:12.522895 | orchestrator | 2026-03-19 03:19:12.522902 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-19 03:19:12.522909 | orchestrator | Thursday 19 March 2026 03:18:55 +0000 (0:00:06.207) 0:04:24.042 ******** 2026-03-19 03:19:12.522916 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522923 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:19:12.522929 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:19:12.522936 | orchestrator | 2026-03-19 03:19:12.522943 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-19 03:19:12.522950 | orchestrator | Thursday 19 March 2026 03:19:01 +0000 (0:00:05.346) 0:04:29.388 ******** 2026-03-19 03:19:12.522956 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522962 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:19:12.522969 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:19:12.522975 | orchestrator | 2026-03-19 03:19:12.522981 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-19 03:19:12.522987 | orchestrator | Thursday 19 March 2026 03:19:06 +0000 (0:00:05.489) 0:04:34.878 ******** 2026-03-19 03:19:12.522993 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:19:12.522999 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:19:12.523005 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:19:12.523012 | orchestrator | 2026-03-19 03:19:12.523017 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:19:12.523025 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-19 03:19:12.523034 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-19 03:19:12.523040 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 03:19:12.523046 | orchestrator | 2026-03-19 03:19:12.523052 | orchestrator | 2026-03-19 03:19:12.523058 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:19:12.523065 | orchestrator | Thursday 19 March 2026 03:19:12 +0000 (0:00:05.680) 0:04:40.559 ******** 2026-03-19 03:19:12.523071 | orchestrator | =============================================================================== 2026-03-19 03:19:12.523078 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.28s 2026-03-19 03:19:12.523084 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.98s 2026-03-19 03:19:12.523091 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.12s 2026-03-19 03:19:12.523097 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.47s 2026-03-19 03:19:12.523112 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.07s 2026-03-19 03:19:12.523118 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.66s 2026-03-19 03:19:12.523125 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.90s 2026-03-19 03:19:12.523131 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2026-03-19 03:19:12.523158 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.42s 2026-03-19 03:19:12.523164 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.97s 2026-03-19 03:19:12.523171 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.21s 2026-03-19 03:19:12.523177 
| orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.75s 2026-03-19 03:19:12.523190 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.71s 2026-03-19 03:19:12.523197 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.68s 2026-03-19 03:19:12.523210 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.49s 2026-03-19 03:19:12.875406 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.35s 2026-03-19 03:19:12.875496 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.35s 2026-03-19 03:19:12.875503 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.30s 2026-03-19 03:19:12.875508 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.23s 2026-03-19 03:19:12.875512 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.20s 2026-03-19 03:19:15.266710 | orchestrator | 2026-03-19 03:19:15 | INFO  | Task ab998fbb-e416-471a-8730-fd39a9cbbfd3 (ceilometer) was prepared for execution. 2026-03-19 03:19:15.266822 | orchestrator | 2026-03-19 03:19:15 | INFO  | It takes a moment until task ab998fbb-e416-471a-8730-fd39a9cbbfd3 (ceilometer) has been started and output is visible here. 
2026-03-19 03:19:39.616753 | orchestrator | 2026-03-19 03:19:39.616932 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:19:39.616954 | orchestrator | 2026-03-19 03:19:39.616966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:19:39.616979 | orchestrator | Thursday 19 March 2026 03:19:19 +0000 (0:00:00.264) 0:00:00.264 ******** 2026-03-19 03:19:39.616990 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:19:39.617003 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:19:39.617015 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:19:39.617025 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:19:39.617036 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:19:39.617047 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:19:39.617057 | orchestrator | 2026-03-19 03:19:39.617069 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:19:39.617080 | orchestrator | Thursday 19 March 2026 03:19:20 +0000 (0:00:00.697) 0:00:00.962 ******** 2026-03-19 03:19:39.617091 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617103 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617114 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617124 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617135 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617146 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-19 03:19:39.617157 | orchestrator | 2026-03-19 03:19:39.617168 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-19 03:19:39.617178 | orchestrator | 2026-03-19 03:19:39.617189 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-19 03:19:39.617200 | orchestrator | Thursday 19 March 2026 03:19:20 +0000 (0:00:00.594) 0:00:01.556 ******** 2026-03-19 03:19:39.617213 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:19:39.617225 | orchestrator | 2026-03-19 03:19:39.617236 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-19 03:19:39.617247 | orchestrator | Thursday 19 March 2026 03:19:22 +0000 (0:00:01.185) 0:00:02.741 ******** 2026-03-19 03:19:39.617258 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:39.617269 | orchestrator | 2026-03-19 03:19:39.617309 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-19 03:19:39.617324 | orchestrator | Thursday 19 March 2026 03:19:22 +0000 (0:00:00.133) 0:00:02.874 ******** 2026-03-19 03:19:39.617336 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:39.617349 | orchestrator | 2026-03-19 03:19:39.617390 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-19 03:19:39.617403 | orchestrator | Thursday 19 March 2026 03:19:22 +0000 (0:00:00.134) 0:00:03.009 ******** 2026-03-19 03:19:39.617415 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:19:39.617428 | orchestrator | 2026-03-19 03:19:39.617441 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-19 03:19:39.617453 | orchestrator | Thursday 19 March 2026 03:19:26 +0000 (0:00:03.696) 0:00:06.705 ******** 2026-03-19 03:19:39.617466 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:19:39.617477 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-19 03:19:39.617489 | orchestrator | 
2026-03-19 03:19:39.617502 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-19 03:19:39.617515 | orchestrator | Thursday 19 March 2026 03:19:30 +0000 (0:00:04.169) 0:00:10.875 ******** 2026-03-19 03:19:39.617527 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:19:39.617539 | orchestrator | 2026-03-19 03:19:39.617568 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-19 03:19:39.617581 | orchestrator | Thursday 19 March 2026 03:19:33 +0000 (0:00:03.542) 0:00:14.418 ******** 2026-03-19 03:19:39.617593 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-19 03:19:39.617605 | orchestrator | 2026-03-19 03:19:39.617617 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-19 03:19:39.617629 | orchestrator | Thursday 19 March 2026 03:19:38 +0000 (0:00:04.295) 0:00:18.714 ******** 2026-03-19 03:19:39.617641 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:39.617652 | orchestrator | 2026-03-19 03:19:39.617663 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-19 03:19:39.617674 | orchestrator | Thursday 19 March 2026 03:19:38 +0000 (0:00:00.138) 0:00:18.853 ******** 2026-03-19 03:19:39.617688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:39.617724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:39.617738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:39.617750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:39.617780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:39.617802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:39.617823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:39.617854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:44.222731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:44.222841 | orchestrator | 2026-03-19 03:19:44.222848 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-19 03:19:44.222854 | orchestrator | Thursday 19 March 2026 03:19:39 +0000 (0:00:01.446) 0:00:20.299 ******** 2026-03-19 03:19:44.222858 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-03-19 03:19:44.222862 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:19:44.222867 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:19:44.222870 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:19:44.222874 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 03:19:44.222878 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 03:19:44.222882 | orchestrator | 2026-03-19 03:19:44.222886 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-19 03:19:44.222891 | orchestrator | Thursday 19 March 2026 03:19:41 +0000 (0:00:01.598) 0:00:21.897 ******** 2026-03-19 03:19:44.222895 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:19:44.222900 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:19:44.222903 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:19:44.222907 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:19:44.222911 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:19:44.222914 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:19:44.222918 | orchestrator | 2026-03-19 03:19:44.222922 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-19 03:19:44.222926 | orchestrator | Thursday 19 March 2026 03:19:41 +0000 (0:00:00.594) 0:00:22.492 ******** 2026-03-19 03:19:44.222930 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:44.222934 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:44.222938 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:44.222942 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:44.222945 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:44.222949 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:44.222953 | orchestrator | 2026-03-19 03:19:44.222957 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-03-19 03:19:44.222961 | orchestrator | Thursday 19 March 2026 03:19:42 +0000 (0:00:00.742) 0:00:23.235 ******** 2026-03-19 03:19:44.222965 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:19:44.222969 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:19:44.222973 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:19:44.222976 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:19:44.222980 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:19:44.223034 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:19:44.223042 | orchestrator | 2026-03-19 03:19:44.223048 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-19 03:19:44.223057 | orchestrator | Thursday 19 March 2026 03:19:43 +0000 (0:00:00.631) 0:00:23.866 ******** 2026-03-19 03:19:44.223064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:44.223071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:44.223100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:44.223106 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:44.223112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:44.223119 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:44.223124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:44.223131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:44.223141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:44.223148 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:44.223154 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:44.223160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:44.223172 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:44.223184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924238 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:48.924411 | orchestrator | 2026-03-19 03:19:48.924428 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-19 03:19:48.924438 | orchestrator | Thursday 19 March 2026 03:19:44 +0000 (0:00:01.043) 0:00:24.909 ******** 2026-03-19 03:19:48.924450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:48.924474 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:48.924501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924512 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:48.924545 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:48.924552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-03-19 03:19:48.924562 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:48.924581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924588 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:48.924593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924598 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:48.924607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:48.924613 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:48.924622 | orchestrator | 2026-03-19 03:19:48.924629 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-19 03:19:48.924635 | orchestrator | Thursday 19 March 2026 03:19:45 +0000 (0:00:00.824) 0:00:25.734 ******** 2026-03-19 03:19:48.924641 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:19:48.924646 | orchestrator | 2026-03-19 03:19:48.924651 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-19 03:19:48.924657 | orchestrator | Thursday 19 March 2026 03:19:45 +0000 (0:00:00.723) 0:00:26.458 ******** 2026-03-19 03:19:48.924662 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:19:48.924668 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:19:48.924672 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:19:48.924677 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:19:48.924682 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:19:48.924687 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:19:48.924692 | orchestrator | 2026-03-19 03:19:48.924696 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-19 03:19:48.924701 | orchestrator | Thursday 19 March 2026 03:19:46 +0000 (0:00:00.770) 
0:00:27.228 ******** 2026-03-19 03:19:48.924706 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:19:48.924711 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:19:48.924715 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:19:48.924720 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:19:48.924725 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:19:48.924730 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:19:48.924734 | orchestrator | 2026-03-19 03:19:48.924739 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-19 03:19:48.924744 | orchestrator | Thursday 19 March 2026 03:19:47 +0000 (0:00:00.977) 0:00:28.206 ******** 2026-03-19 03:19:48.924749 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:48.924754 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:48.924758 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:48.924763 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:48.924768 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:48.924773 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:48.924777 | orchestrator | 2026-03-19 03:19:48.924782 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-19 03:19:48.924787 | orchestrator | Thursday 19 March 2026 03:19:48 +0000 (0:00:00.815) 0:00:29.022 ******** 2026-03-19 03:19:48.924793 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:48.924799 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:48.924804 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:48.924810 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:48.924816 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:48.924821 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:48.924827 | orchestrator | 2026-03-19 03:19:53.697970 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-19 03:19:53.698092 | orchestrator | Thursday 19 March 2026 03:19:48 +0000 (0:00:00.599) 0:00:29.621 ******** 2026-03-19 03:19:53.698102 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:19:53.698108 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:19:53.698112 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:19:53.698116 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:19:53.698120 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 03:19:53.698123 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 03:19:53.698127 | orchestrator | 2026-03-19 03:19:53.698132 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-19 03:19:53.698136 | orchestrator | Thursday 19 March 2026 03:19:50 +0000 (0:00:01.410) 0:00:31.032 ******** 2026-03-19 03:19:53.698143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:53.698178 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:53.698193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:53.698207 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:53.698213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:53.698239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698248 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:53.698252 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 03:19:53.698259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698329 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:53.698345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:53.698352 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:53.698357 | orchestrator | 2026-03-19 03:19:53.698364 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-03-19 03:19:53.698371 | orchestrator | Thursday 19 March 2026 03:19:51 +0000 (0:00:00.831) 0:00:31.863 ******** 2026-03-19 03:19:53.698377 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 03:19:53.698383 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:53.698389 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:53.698394 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:53.698400 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:53.698406 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:53.698412 | orchestrator | 2026-03-19 03:19:53.698419 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-03-19 03:19:53.698429 | orchestrator | Thursday 19 March 2026 03:19:51 +0000 (0:00:00.778) 0:00:32.641 ******** 2026-03-19 03:19:53.698436 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:19:53.698442 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:19:53.698448 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:19:53.698454 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:19:53.698459 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 03:19:53.698465 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 03:19:53.698472 | orchestrator | 2026-03-19 03:19:53.698478 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-03-19 03:19:53.698485 | orchestrator | Thursday 19 March 2026 03:19:53 +0000 (0:00:01.347) 0:00:33.989 ******** 2026-03-19 03:19:53.698501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:59.421389 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:59.421402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:59.421437 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:59.421444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:59.421458 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:59.421465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421497 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:59.421520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421526 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:59.421532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.421538 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:59.421545 | orchestrator | 2026-03-19 03:19:59.421552 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-03-19 03:19:59.421560 | orchestrator | Thursday 19 March 2026 03:19:54 +0000 (0:00:01.030) 0:00:35.019 ******** 2026-03-19 03:19:59.421566 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:59.421572 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:59.421579 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:59.421589 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:59.421595 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:59.421602 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:59.421608 | orchestrator | 2026-03-19 03:19:59.421614 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-03-19 03:19:59.421621 | orchestrator | Thursday 19 March 2026 03:19:55 +0000 (0:00:00.742) 0:00:35.761 ******** 2026-03-19 03:19:59.421627 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:59.421634 | orchestrator | 2026-03-19 03:19:59.421640 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-03-19 03:19:59.421647 | orchestrator | Thursday 19 March 2026 03:19:55 +0000 (0:00:00.146) 0:00:35.908 ******** 2026-03-19 03:19:59.421654 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:59.421660 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:59.421667 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:19:59.421673 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:19:59.421679 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:19:59.421685 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:19:59.421691 | 
orchestrator | 2026-03-19 03:19:59.421698 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-19 03:19:59.421710 | orchestrator | Thursday 19 March 2026 03:19:55 +0000 (0:00:00.603) 0:00:36.512 ******** 2026-03-19 03:19:59.421718 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:19:59.421725 | orchestrator | 2026-03-19 03:19:59.421732 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-03-19 03:19:59.421738 | orchestrator | Thursday 19 March 2026 03:19:57 +0000 (0:00:01.323) 0:00:37.835 ******** 2026-03-19 03:19:59.421744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.421756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.901742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.901848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.901877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.901911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:19:59.901920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:59.901930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:59.901952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:19:59.901961 | orchestrator | 2026-03-19 03:19:59.901970 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-03-19 03:19:59.901978 | orchestrator | Thursday 19 March 2026 03:19:59 +0000 (0:00:02.277) 0:00:40.112 ******** 2026-03-19 03:19:59.901987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.901999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:59.902064 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:19:59.902079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.902090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:19:59.902102 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:19:59.902114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:19:59.902136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:01.725542 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:01.725634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725649 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:01.725677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725733 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:01.725743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-03-19 03:20:01.725752 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:01.725760 | orchestrator | 2026-03-19 03:20:01.725768 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-03-19 03:20:01.725777 | orchestrator | Thursday 19 March 2026 03:20:00 +0000 (0:00:00.786) 0:00:40.899 ******** 2026-03-19 03:20:01.725787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:01.725823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:01.725848 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:20:01.725872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:01.725885 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:20:01.725890 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:01.725896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725901 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:01.725907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:01.725912 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:01.725925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:09.438556 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:09.438664 | orchestrator | 2026-03-19 03:20:09.438676 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-03-19 03:20:09.438686 | orchestrator | Thursday 19 March 2026 03:20:01 +0000 (0:00:01.511) 0:00:42.410 ******** 2026-03-19 03:20:09.438713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:09.438818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:09.438826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:09.438833 | orchestrator | 2026-03-19 03:20:09.438841 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-19 03:20:09.438849 | orchestrator | Thursday 19 March 2026 03:20:04 +0000 (0:00:02.676) 0:00:45.087 
******** 2026-03-19 03:20:09.438856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:09.438882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.779110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.779231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.779249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.779321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:18.779341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:18.779384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:18.779397 | orchestrator | 2026-03-19 03:20:18.779411 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-19 03:20:18.779424 | orchestrator | Thursday 19 March 2026 03:20:09 +0000 (0:00:05.042) 0:00:50.130 ******** 2026-03-19 03:20:18.779455 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:20:18.779468 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:20:18.779479 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:20:18.779490 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:20:18.779501 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 03:20:18.779512 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 03:20:18.779523 | orchestrator | 2026-03-19 03:20:18.779534 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-19 03:20:18.779545 | orchestrator | Thursday 19 March 2026 03:20:10 +0000 (0:00:01.431) 0:00:51.562 ******** 2026-03-19 03:20:18.779556 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:20:18.779567 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:20:18.779586 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:18.779597 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:18.779608 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:18.779618 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:18.779629 | orchestrator | 2026-03-19 03:20:18.779640 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-19 
03:20:18.779651 | orchestrator | Thursday 19 March 2026 03:20:11 +0000 (0:00:00.567) 0:00:52.129 ******** 2026-03-19 03:20:18.779662 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:18.779673 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:18.779684 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:18.779695 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:20:18.779705 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:20:18.779716 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:20:18.779731 | orchestrator | 2026-03-19 03:20:18.779749 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-19 03:20:18.779772 | orchestrator | Thursday 19 March 2026 03:20:13 +0000 (0:00:01.662) 0:00:53.791 ******** 2026-03-19 03:20:18.779798 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:18.779816 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:18.779834 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:18.779852 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:20:18.779869 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:20:18.779887 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:20:18.779904 | orchestrator | 2026-03-19 03:20:18.779921 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-19 03:20:18.779937 | orchestrator | Thursday 19 March 2026 03:20:14 +0000 (0:00:01.510) 0:00:55.302 ******** 2026-03-19 03:20:18.779955 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:20:18.779971 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:20:18.779987 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:20:18.780004 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:20:18.780020 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 03:20:18.780037 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-03-19 03:20:18.780053 | orchestrator | 2026-03-19 03:20:18.780087 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-19 03:20:18.780104 | orchestrator | Thursday 19 March 2026 03:20:16 +0000 (0:00:01.627) 0:00:56.929 ******** 2026-03-19 03:20:18.780123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.780143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.780163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:18.780207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:19.589178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:19.589354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:19.589404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:19.589415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:19.590286 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:19.590319 | orchestrator | 2026-03-19 03:20:19.590331 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-19 03:20:19.590342 | orchestrator | Thursday 19 March 2026 03:20:18 +0000 (0:00:02.538) 0:00:59.467 ******** 2026-03-19 03:20:19.590370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:19.590402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:19.590414 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:20:19.590424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:19.590447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:19.590456 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:20:19.590465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:19.590474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:19.590482 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:19.590491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:19.590500 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:19.590521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.002681 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:23.002783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.002796 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:23.002806 | orchestrator | 2026-03-19 03:20:23.002815 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-19 03:20:23.002824 | orchestrator | Thursday 19 March 2026 03:20:19 +0000 (0:00:00.817) 0:01:00.285 ******** 2026-03-19 03:20:23.002832 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:20:23.002840 | orchestrator | skipping: 
[testbed-node-1] 2026-03-19 03:20:23.002848 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:23.002856 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:23.002864 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:23.002872 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:23.002879 | orchestrator | 2026-03-19 03:20:23.002887 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-19 03:20:23.002896 | orchestrator | Thursday 19 March 2026 03:20:20 +0000 (0:00:00.813) 0:01:01.098 ******** 2026-03-19 03:20:23.002905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.002915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:23.002925 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
03:20:23.002933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.002959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:23.002992 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:20:23.003015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.003024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 03:20:23.003033 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:23.003047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.003060 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:23.003073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.003095 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:23.003116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-19 03:20:23.003139 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:23.003183 | orchestrator | 2026-03-19 03:20:23.003196 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-19 03:20:23.003208 | orchestrator | Thursday 19 March 2026 03:20:21 +0000 (0:00:00.835) 0:01:01.933 ******** 2026-03-19 03:20:23.003231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.904857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.904956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.904971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.904980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.905031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-19 03:20:46.905040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:46.905062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:46.905069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-19 03:20:46.905075 | orchestrator | 
2026-03-19 03:20:46.905082 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-19 03:20:46.905089 | orchestrator | Thursday 19 March 2026 03:20:22 +0000 (0:00:01.760) 0:01:03.693 ******** 2026-03-19 03:20:46.905095 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:20:46.905102 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:20:46.905108 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:20:46.905114 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:20:46.905119 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:20:46.905125 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:20:46.905132 | orchestrator | 2026-03-19 03:20:46.905138 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-19 03:20:46.905144 | orchestrator | Thursday 19 March 2026 03:20:23 +0000 (0:00:00.600) 0:01:04.293 ******** 2026-03-19 03:20:46.905150 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:20:46.905156 | orchestrator | 2026-03-19 03:20:46.905161 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-19 03:20:46.905167 | orchestrator | Thursday 19 March 2026 03:20:27 +0000 (0:00:04.324) 0:01:08.618 ******** 2026-03-19 03:20:46.905173 | orchestrator | 2026-03-19 03:20:46.905179 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-19 03:20:46.905185 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.088) 0:01:08.706 ******** 2026-03-19 03:20:46.905198 | orchestrator | 2026-03-19 03:20:46.905204 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-19 03:20:46.905210 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.078) 0:01:08.785 ******** 2026-03-19 03:20:46.905217 | orchestrator | 2026-03-19 03:20:46.905222 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-19 03:20:46.905229 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.260) 0:01:09.046 ******** 2026-03-19 03:20:46.905236 | orchestrator | 2026-03-19 03:20:46.905241 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-19 03:20:46.905299 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.071) 0:01:09.118 ******** 2026-03-19 03:20:46.905305 | orchestrator | 2026-03-19 03:20:46.905311 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-19 03:20:46.905318 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.071) 0:01:09.190 ******** 2026-03-19 03:20:46.905322 | orchestrator | 2026-03-19 03:20:46.905326 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-19 03:20:46.905329 | orchestrator | Thursday 19 March 2026 03:20:28 +0000 (0:00:00.070) 0:01:09.261 ******** 2026-03-19 03:20:46.905333 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:20:46.905337 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:20:46.905340 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:20:46.905344 | orchestrator | 2026-03-19 03:20:46.905348 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-19 03:20:46.905357 | orchestrator | Thursday 19 March 2026 03:20:36 +0000 (0:00:07.462) 0:01:16.723 ******** 2026-03-19 03:20:46.905361 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:20:46.905365 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:20:46.905369 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:20:46.905372 | orchestrator | 2026-03-19 03:20:46.905376 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-19 03:20:46.905380 | orchestrator | Thursday 19 March 2026 03:20:40 +0000 
(0:00:04.597) 0:01:21.320 ******** 2026-03-19 03:20:46.905384 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:20:46.905387 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:20:46.905391 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:20:46.905395 | orchestrator | 2026-03-19 03:20:46.905399 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:20:46.905403 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-19 03:20:46.905409 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 03:20:46.905420 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 03:20:47.395610 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-19 03:20:47.395689 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-19 03:20:47.395696 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-19 03:20:47.395701 | orchestrator | 2026-03-19 03:20:47.395706 | orchestrator | 2026-03-19 03:20:47.395710 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:20:47.395715 | orchestrator | Thursday 19 March 2026 03:20:46 +0000 (0:00:06.269) 0:01:27.590 ******** 2026-03-19 03:20:47.395719 | orchestrator | =============================================================================== 2026-03-19 03:20:47.395749 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 7.46s 2026-03-19 03:20:47.395754 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 6.27s 2026-03-19 03:20:47.395758 | orchestrator | ceilometer : Copying over 
ceilometer.conf ------------------------------- 5.04s 2026-03-19 03:20:47.395762 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.60s 2026-03-19 03:20:47.395766 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.32s 2026-03-19 03:20:47.395769 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.30s 2026-03-19 03:20:47.395773 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.17s 2026-03-19 03:20:47.395777 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.70s 2026-03-19 03:20:47.395781 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.54s 2026-03-19 03:20:47.395784 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.68s 2026-03-19 03:20:47.395788 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.54s 2026-03-19 03:20:47.395792 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.28s 2026-03-19 03:20:47.395795 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.76s 2026-03-19 03:20:47.395799 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.66s 2026-03-19 03:20:47.395804 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.63s 2026-03-19 03:20:47.395807 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.60s 2026-03-19 03:20:47.395811 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.51s 2026-03-19 03:20:47.395815 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.51s 2026-03-19 03:20:47.395819 | orchestrator | ceilometer : Ensuring config 
directories exist -------------------------- 1.45s 2026-03-19 03:20:47.395822 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.43s 2026-03-19 03:20:49.724497 | orchestrator | 2026-03-19 03:20:49 | INFO  | Task 86d7b1ef-c501-4005-b310-5fc11fd55db7 (aodh) was prepared for execution. 2026-03-19 03:20:49.724590 | orchestrator | 2026-03-19 03:20:49 | INFO  | It takes a moment until task 86d7b1ef-c501-4005-b310-5fc11fd55db7 (aodh) has been started and output is visible here. 2026-03-19 03:21:23.772268 | orchestrator | 2026-03-19 03:21:23.772373 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:21:23.772382 | orchestrator | 2026-03-19 03:21:23.772389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:21:23.772395 | orchestrator | Thursday 19 March 2026 03:20:53 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-03-19 03:21:23.772401 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:21:23.772408 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:21:23.772414 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:21:23.772420 | orchestrator | 2026-03-19 03:21:23.772426 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:21:23.772445 | orchestrator | Thursday 19 March 2026 03:20:54 +0000 (0:00:00.327) 0:00:00.588 ******** 2026-03-19 03:21:23.772451 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-19 03:21:23.772457 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-19 03:21:23.772463 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-19 03:21:23.772469 | orchestrator | 2026-03-19 03:21:23.772474 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-19 03:21:23.772480 | orchestrator | 2026-03-19 03:21:23.772486 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-03-19 03:21:23.772492 | orchestrator | Thursday 19 March 2026 03:20:54 +0000 (0:00:00.476) 0:00:01.065 ******** 2026-03-19 03:21:23.772497 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:21:23.772556 | orchestrator | 2026-03-19 03:21:23.772568 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-19 03:21:23.772578 | orchestrator | Thursday 19 March 2026 03:20:55 +0000 (0:00:00.542) 0:00:01.608 ******** 2026-03-19 03:21:23.772587 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-19 03:21:23.772597 | orchestrator | 2026-03-19 03:21:23.772607 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-19 03:21:23.772617 | orchestrator | Thursday 19 March 2026 03:20:58 +0000 (0:00:03.674) 0:00:05.282 ******** 2026-03-19 03:21:23.772626 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-19 03:21:23.772636 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-19 03:21:23.772644 | orchestrator | 2026-03-19 03:21:23.772650 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-19 03:21:23.772656 | orchestrator | Thursday 19 March 2026 03:21:05 +0000 (0:00:06.939) 0:00:12.222 ******** 2026-03-19 03:21:23.772662 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:21:23.772668 | orchestrator | 2026-03-19 03:21:23.772674 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-19 03:21:23.772679 | orchestrator | Thursday 19 March 2026 03:21:09 +0000 (0:00:03.799) 0:00:16.022 ******** 2026-03-19 03:21:23.772685 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-03-19 03:21:23.772691 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-19 03:21:23.772697 | orchestrator | 2026-03-19 03:21:23.772702 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-03-19 03:21:23.772708 | orchestrator | Thursday 19 March 2026 03:21:13 +0000 (0:00:04.241) 0:00:20.263 ******** 2026-03-19 03:21:23.772714 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:21:23.772720 | orchestrator | 2026-03-19 03:21:23.772725 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-19 03:21:23.772731 | orchestrator | Thursday 19 March 2026 03:21:17 +0000 (0:00:03.584) 0:00:23.848 ******** 2026-03-19 03:21:23.772737 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-19 03:21:23.772742 | orchestrator | 2026-03-19 03:21:23.772748 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-19 03:21:23.772754 | orchestrator | Thursday 19 March 2026 03:21:21 +0000 (0:00:04.157) 0:00:28.006 ******** 2026-03-19 03:21:23.772763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:23.772791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:23.772815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:23.772827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:23.772839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:23.772848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:23.772858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:23.772875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:25.071454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:25.071567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:25.071578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:25.071585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:25.071592 | orchestrator | 2026-03-19 03:21:25.071601 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-19 03:21:25.071609 | orchestrator | Thursday 19 March 2026 03:21:23 +0000 (0:00:02.094) 0:00:30.100 ******** 2026-03-19 03:21:25.071615 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:21:25.071622 | orchestrator | 2026-03-19 
03:21:25.071629 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-19 03:21:25.071635 | orchestrator | Thursday 19 March 2026 03:21:23 +0000 (0:00:00.164) 0:00:30.265 ******** 2026-03-19 03:21:25.071642 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:21:25.071648 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:21:25.071653 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:21:25.071660 | orchestrator | 2026-03-19 03:21:25.071666 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-19 03:21:25.071672 | orchestrator | Thursday 19 March 2026 03:21:24 +0000 (0:00:00.524) 0:00:30.789 ******** 2026-03-19 03:21:25.071679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:25.071727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:25.071741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:25.071748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:25.071755 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:21:25.071761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:25.071768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:25.071774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:25.071792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.185441 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:21:30.185573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:30.185590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-19 03:21:30.185598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.185603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.185609 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:21:30.185616 | orchestrator | 2026-03-19 03:21:30.185624 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-19 03:21:30.185658 | orchestrator | Thursday 19 March 2026 03:21:25 +0000 (0:00:00.624) 0:00:31.414 ******** 2026-03-19 03:21:30.185667 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:21:30.185675 | orchestrator | 2026-03-19 03:21:30.185682 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-19 03:21:30.185689 | orchestrator | Thursday 
19 March 2026 03:21:25 +0000 (0:00:00.726) 0:00:32.141 ******** 2026-03-19 03:21:30.185696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:30.185728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:30.185737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:30.185745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:30.185753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-03-19 03:21:30.185769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:30.185777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.185796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.811577 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.811716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.811742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.811792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:30.811842 | orchestrator | 2026-03-19 03:21:30.811856 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-19 03:21:30.811869 | orchestrator | Thursday 19 March 2026 03:21:30 +0000 (0:00:04.383) 0:00:36.525 ******** 2026-03-19 03:21:30.811882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:30.811911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:30.811976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.811990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.812001 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:21:30.812013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:30.812033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:30.812045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.812056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:30.812067 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:21:30.812092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:31.809366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-19 03:21:31.809487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809588 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:21:31.809604 | orchestrator | 2026-03-19 03:21:31.809616 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-19 03:21:31.809630 | orchestrator | Thursday 19 March 2026 03:21:30 +0000 (0:00:00.630) 0:00:37.155 ******** 2026-03-19 03:21:31.809648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:31.809687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:31.809703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809753 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:21:31.809763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:31.809773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:31.809783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:31.809808 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:21:31.809826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-19 03:21:36.003396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 03:21:36.003537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 03:21:36.003550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 03:21:36.003555 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:21:36.003562 | orchestrator | 2026-03-19 03:21:36.003567 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-03-19 03:21:36.003573 | orchestrator | Thursday 19 March 2026 03:21:31 +0000 (0:00:00.994) 0:00:38.149 ******** 2026-03-19 03:21:36.003578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:36.003599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:36.003629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:36.003640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:36.003680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.320715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.320851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.320887 | orchestrator | 2026-03-19 03:21:44.320908 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-19 03:21:44.320926 | orchestrator | Thursday 19 March 2026 03:21:35 +0000 (0:00:04.191) 0:00:42.341 ******** 2026-03-19 03:21:44.320946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:44.320989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:44.321039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:44.321072 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:44.321175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.338823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.338957 | orchestrator | 2026-03-19 03:21:49.338982 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-19 03:21:49.338998 | orchestrator | Thursday 19 March 2026 03:21:44 +0000 (0:00:08.319) 0:00:50.661 ******** 2026-03-19 03:21:49.339014 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:21:49.339029 | orchestrator | 
changed: [testbed-node-1] 2026-03-19 03:21:49.339042 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:21:49.339051 | orchestrator | 2026-03-19 03:21:49.339060 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-19 03:21:49.339069 | orchestrator | Thursday 19 March 2026 03:21:45 +0000 (0:00:01.665) 0:00:52.326 ******** 2026-03-19 03:21:49.339079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:49.339109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:49.339147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-19 03:21:49.339174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:21:49.339485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:22:40.747903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-19 03:22:40.748000 | orchestrator | 2026-03-19 03:22:40.748011 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-19 03:22:40.748019 | orchestrator | Thursday 19 March 2026 03:21:49 +0000 (0:00:03.350) 0:00:55.676 ******** 2026-03-19 03:22:40.748025 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:22:40.748031 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:22:40.748038 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:22:40.748044 | orchestrator | 2026-03-19 03:22:40.748050 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-03-19 03:22:40.748056 | orchestrator | Thursday 19 March 2026 03:21:49 +0000 (0:00:00.308) 0:00:55.984 ******** 2026-03-19 03:22:40.748062 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748067 | orchestrator | 2026-03-19 03:22:40.748073 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-03-19 03:22:40.748078 | orchestrator | Thursday 19 March 2026 03:21:51 +0000 (0:00:02.300) 0:00:58.285 ******** 2026-03-19 03:22:40.748107 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748114 | orchestrator | 2026-03-19 
03:22:40.748121 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-03-19 03:22:40.748126 | orchestrator | Thursday 19 March 2026 03:21:54 +0000 (0:00:02.605) 0:01:00.890 ******** 2026-03-19 03:22:40.748132 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748138 | orchestrator | 2026-03-19 03:22:40.748145 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-19 03:22:40.748152 | orchestrator | Thursday 19 March 2026 03:22:08 +0000 (0:00:13.606) 0:01:14.496 ******** 2026-03-19 03:22:40.748158 | orchestrator | 2026-03-19 03:22:40.748164 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-19 03:22:40.748170 | orchestrator | Thursday 19 March 2026 03:22:08 +0000 (0:00:00.064) 0:01:14.561 ******** 2026-03-19 03:22:40.748176 | orchestrator | 2026-03-19 03:22:40.748182 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-19 03:22:40.748189 | orchestrator | Thursday 19 March 2026 03:22:08 +0000 (0:00:00.066) 0:01:14.628 ******** 2026-03-19 03:22:40.748243 | orchestrator | 2026-03-19 03:22:40.748250 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-03-19 03:22:40.748271 | orchestrator | Thursday 19 March 2026 03:22:08 +0000 (0:00:00.187) 0:01:14.816 ******** 2026-03-19 03:22:40.748278 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748284 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:22:40.748291 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:22:40.748296 | orchestrator | 2026-03-19 03:22:40.748303 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-03-19 03:22:40.748309 | orchestrator | Thursday 19 March 2026 03:22:13 +0000 (0:00:05.411) 0:01:20.227 ******** 2026-03-19 03:22:40.748315 | orchestrator | changed: 
[testbed-node-0] 2026-03-19 03:22:40.748321 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:22:40.748327 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:22:40.748334 | orchestrator | 2026-03-19 03:22:40.748338 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-03-19 03:22:40.748342 | orchestrator | Thursday 19 March 2026 03:22:23 +0000 (0:00:10.074) 0:01:30.302 ******** 2026-03-19 03:22:40.748345 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:22:40.748349 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:22:40.748353 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748357 | orchestrator | 2026-03-19 03:22:40.748361 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-03-19 03:22:40.748364 | orchestrator | Thursday 19 March 2026 03:22:31 +0000 (0:00:08.000) 0:01:38.302 ******** 2026-03-19 03:22:40.748368 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:22:40.748372 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:22:40.748375 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:22:40.748379 | orchestrator | 2026-03-19 03:22:40.748383 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:22:40.748388 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 03:22:40.748393 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 03:22:40.748397 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-19 03:22:40.748401 | orchestrator | 2026-03-19 03:22:40.748405 | orchestrator | 2026-03-19 03:22:40.748409 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:22:40.748412 | orchestrator | Thursday 19 March 2026 
03:22:40 +0000 (0:00:08.442) 0:01:46.744 ******** 2026-03-19 03:22:40.748416 | orchestrator | =============================================================================== 2026-03-19 03:22:40.748420 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.61s 2026-03-19 03:22:40.748430 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.07s 2026-03-19 03:22:40.748447 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 8.44s 2026-03-19 03:22:40.748452 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.32s 2026-03-19 03:22:40.748456 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 8.00s 2026-03-19 03:22:40.748461 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.94s 2026-03-19 03:22:40.748465 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.41s 2026-03-19 03:22:40.748469 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.38s 2026-03-19 03:22:40.748480 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.24s 2026-03-19 03:22:40.748485 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.19s 2026-03-19 03:22:40.748489 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 4.16s 2026-03-19 03:22:40.748493 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.80s 2026-03-19 03:22:40.748497 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.67s 2026-03-19 03:22:40.748501 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.58s 2026-03-19 03:22:40.748505 | orchestrator | aodh : Check aodh containers 
-------------------------------------------- 3.35s 2026-03-19 03:22:40.748510 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.61s 2026-03-19 03:22:40.748514 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.30s 2026-03-19 03:22:40.748518 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.09s 2026-03-19 03:22:40.748522 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.67s 2026-03-19 03:22:40.748527 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.99s 2026-03-19 03:22:43.124054 | orchestrator | 2026-03-19 03:22:43 | INFO  | Task fe34f9ef-92f9-4d59-a4ad-64d43fa5f044 (kolla-ceph-rgw) was prepared for execution. 2026-03-19 03:22:43.124156 | orchestrator | 2026-03-19 03:22:43 | INFO  | It takes a moment until task fe34f9ef-92f9-4d59-a4ad-64d43fa5f044 (kolla-ceph-rgw) has been started and output is visible here. 
2026-03-19 03:23:18.015878 | orchestrator | 2026-03-19 03:23:18.015963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:23:18.015972 | orchestrator | 2026-03-19 03:23:18.015978 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:23:18.015984 | orchestrator | Thursday 19 March 2026 03:22:47 +0000 (0:00:00.289) 0:00:00.289 ******** 2026-03-19 03:23:18.015990 | orchestrator | ok: [testbed-manager] 2026-03-19 03:23:18.015996 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:23:18.016002 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:23:18.016018 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:23:18.016024 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:23:18.016029 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:23:18.016034 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:23:18.016039 | orchestrator | 2026-03-19 03:23:18.016045 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:23:18.016050 | orchestrator | Thursday 19 March 2026 03:22:48 +0000 (0:00:00.901) 0:00:01.190 ******** 2026-03-19 03:23:18.016056 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016061 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016067 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016072 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016077 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016100 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016105 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-19 03:23:18.016110 | orchestrator | 2026-03-19 03:23:18.016116 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-03-19 03:23:18.016121 | orchestrator | 2026-03-19 03:23:18.016126 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-19 03:23:18.016131 | orchestrator | Thursday 19 March 2026 03:22:49 +0000 (0:00:00.775) 0:00:01.965 ******** 2026-03-19 03:23:18.016136 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:23:18.016143 | orchestrator | 2026-03-19 03:23:18.016148 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-19 03:23:18.016154 | orchestrator | Thursday 19 March 2026 03:22:50 +0000 (0:00:01.545) 0:00:03.511 ******** 2026-03-19 03:23:18.016159 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-19 03:23:18.016165 | orchestrator | 2026-03-19 03:23:18.016170 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-19 03:23:18.016218 | orchestrator | Thursday 19 March 2026 03:22:54 +0000 (0:00:03.500) 0:00:07.012 ******** 2026-03-19 03:23:18.016224 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-19 03:23:18.016232 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-19 03:23:18.016237 | orchestrator | 2026-03-19 03:23:18.016244 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-19 03:23:18.016252 | orchestrator | Thursday 19 March 2026 03:22:59 +0000 (0:00:05.610) 0:00:12.623 ******** 2026-03-19 03:23:18.016261 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-19 03:23:18.016269 | orchestrator | 2026-03-19 03:23:18.016286 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-19 03:23:18.016295 | orchestrator | Thursday 19 March 2026 03:23:02 +0000 (0:00:03.067) 0:00:15.690 ******** 2026-03-19 03:23:18.016311 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:23:18.016320 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-19 03:23:18.016329 | orchestrator | 2026-03-19 03:23:18.016337 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-19 03:23:18.016345 | orchestrator | Thursday 19 March 2026 03:23:06 +0000 (0:00:03.832) 0:00:19.523 ******** 2026-03-19 03:23:18.016350 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-19 03:23:18.016356 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-19 03:23:18.016361 | orchestrator | 2026-03-19 03:23:18.016366 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-19 03:23:18.016371 | orchestrator | Thursday 19 March 2026 03:23:12 +0000 (0:00:05.998) 0:00:25.521 ******** 2026-03-19 03:23:18.016376 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-19 03:23:18.016381 | orchestrator | 2026-03-19 03:23:18.016388 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:23:18.016397 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016410 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016420 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016427 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016435 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016467 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016476 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:18.016485 | orchestrator | 2026-03-19 03:23:18.016492 | orchestrator | 2026-03-19 03:23:18.016498 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:23:18.016504 | orchestrator | Thursday 19 March 2026 03:23:17 +0000 (0:00:04.850) 0:00:30.372 ******** 2026-03-19 03:23:18.016514 | orchestrator | =============================================================================== 2026-03-19 03:23:18.016520 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.00s 2026-03-19 03:23:18.016526 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.61s 2026-03-19 03:23:18.016532 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.85s 2026-03-19 03:23:18.016537 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.83s 2026-03-19 03:23:18.016543 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.50s 2026-03-19 03:23:18.016549 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.07s 2026-03-19 03:23:18.016555 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.55s 2026-03-19 03:23:18.016561 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2026-03-19 03:23:18.016567 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-03-19 03:23:20.381822 | orchestrator | 2026-03-19 03:23:20 | 
INFO  | Task 57b425e6-8a0b-464a-a5d2-d839aeaecb10 (gnocchi) was prepared for execution. 2026-03-19 03:23:20.381893 | orchestrator | 2026-03-19 03:23:20 | INFO  | It takes a moment until task 57b425e6-8a0b-464a-a5d2-d839aeaecb10 (gnocchi) has been started and output is visible here. 2026-03-19 03:23:25.747886 | orchestrator | 2026-03-19 03:23:25.747986 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:23:25.747998 | orchestrator | 2026-03-19 03:23:25.748006 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:23:25.748015 | orchestrator | Thursday 19 March 2026 03:23:24 +0000 (0:00:00.285) 0:00:00.285 ******** 2026-03-19 03:23:25.748021 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:23:25.748028 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:23:25.748034 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:23:25.748040 | orchestrator | 2026-03-19 03:23:25.748046 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:23:25.748052 | orchestrator | Thursday 19 March 2026 03:23:25 +0000 (0:00:00.339) 0:00:00.625 ******** 2026-03-19 03:23:25.748058 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-03-19 03:23:25.748065 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-03-19 03:23:25.748072 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-03-19 03:23:25.748078 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-03-19 03:23:25.748084 | orchestrator | 2026-03-19 03:23:25.748090 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-03-19 03:23:25.748096 | orchestrator | skipping: no hosts matched 2026-03-19 03:23:25.748103 | orchestrator | 2026-03-19 03:23:25.748109 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-19 03:23:25.748116 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:25.748124 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:25.748155 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:23:25.748160 | orchestrator | 2026-03-19 03:23:25.748166 | orchestrator | 2026-03-19 03:23:25.748208 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:23:25.748214 | orchestrator | Thursday 19 March 2026 03:23:25 +0000 (0:00:00.385) 0:00:01.010 ******** 2026-03-19 03:23:25.748220 | orchestrator | =============================================================================== 2026-03-19 03:23:25.748225 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-03-19 03:23:25.748232 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-03-19 03:23:28.073573 | orchestrator | 2026-03-19 03:23:28 | INFO  | Task a09f0add-deef-4d4d-ba15-b76516b22518 (manila) was prepared for execution. 2026-03-19 03:23:28.073684 | orchestrator | 2026-03-19 03:23:28 | INFO  | It takes a moment until task a09f0add-deef-4d4d-ba15-b76516b22518 (manila) has been started and output is visible here. 
2026-03-19 03:24:12.403012 | orchestrator | 2026-03-19 03:24:12.403133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:24:12.403161 | orchestrator | 2026-03-19 03:24:12.403180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 03:24:12.403200 | orchestrator | Thursday 19 March 2026 03:23:32 +0000 (0:00:00.310) 0:00:00.310 ******** 2026-03-19 03:24:12.403220 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:24:12.403289 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:24:12.403341 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:24:12.403360 | orchestrator | 2026-03-19 03:24:12.403379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:24:12.403399 | orchestrator | Thursday 19 March 2026 03:23:32 +0000 (0:00:00.332) 0:00:00.642 ******** 2026-03-19 03:24:12.403417 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-03-19 03:24:12.403436 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-03-19 03:24:12.403456 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-03-19 03:24:12.403474 | orchestrator | 2026-03-19 03:24:12.403492 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-03-19 03:24:12.403511 | orchestrator | 2026-03-19 03:24:12.403531 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-19 03:24:12.403571 | orchestrator | Thursday 19 March 2026 03:23:33 +0000 (0:00:00.469) 0:00:01.111 ******** 2026-03-19 03:24:12.403590 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:24:12.403610 | orchestrator | 2026-03-19 03:24:12.403665 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-19 
03:24:12.403685 | orchestrator | Thursday 19 March 2026 03:23:33 +0000 (0:00:00.570) 0:00:01.682 ******** 2026-03-19 03:24:12.403704 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:12.403725 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:24:12.403745 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:24:12.403763 | orchestrator | 2026-03-19 03:24:12.403782 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-03-19 03:24:12.403800 | orchestrator | Thursday 19 March 2026 03:23:34 +0000 (0:00:00.488) 0:00:02.170 ******** 2026-03-19 03:24:12.403818 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-03-19 03:24:12.403836 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-03-19 03:24:12.403854 | orchestrator | 2026-03-19 03:24:12.403874 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-03-19 03:24:12.403894 | orchestrator | Thursday 19 March 2026 03:23:41 +0000 (0:00:07.047) 0:00:09.218 ******** 2026-03-19 03:24:12.403913 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-03-19 03:24:12.403961 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-03-19 03:24:12.403981 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-03-19 03:24:12.404001 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-03-19 03:24:12.404020 | orchestrator | 2026-03-19 03:24:12.404038 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-03-19 03:24:12.404055 | orchestrator | Thursday 19 March 2026 03:23:54 +0000 (0:00:13.540) 0:00:22.759 ******** 2026-03-19 03:24:12.404089 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-19 03:24:12.404107 | orchestrator | 2026-03-19 03:24:12.404125 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-03-19 03:24:12.404143 | orchestrator | Thursday 19 March 2026 03:23:58 +0000 (0:00:03.463) 0:00:26.223 ******** 2026-03-19 03:24:12.404161 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-19 03:24:12.404180 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-03-19 03:24:12.404194 | orchestrator | 2026-03-19 03:24:12.404205 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-03-19 03:24:12.404216 | orchestrator | Thursday 19 March 2026 03:24:02 +0000 (0:00:04.225) 0:00:30.448 ******** 2026-03-19 03:24:12.404226 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-19 03:24:12.404243 | orchestrator | 2026-03-19 03:24:12.404261 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-03-19 03:24:12.404279 | orchestrator | Thursday 19 March 2026 03:24:05 +0000 (0:00:03.421) 0:00:33.870 ******** 2026-03-19 03:24:12.404296 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-03-19 03:24:12.404341 | orchestrator | 2026-03-19 03:24:12.404360 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-03-19 03:24:12.404377 | orchestrator | Thursday 19 March 2026 03:24:10 +0000 (0:00:04.193) 0:00:38.063 ******** 2026-03-19 03:24:12.404428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:12.404461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:12.404482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:12.404517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:12.404530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:12.404541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:12.404562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:22.597969 | orchestrator | 2026-03-19 03:24:22.597974 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-19 03:24:22.598001 | orchestrator | Thursday 19 March 2026 03:24:12 +0000 (0:00:02.363) 0:00:40.427 ******** 2026-03-19 03:24:22.598006 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:24:22.598036 | orchestrator | 2026-03-19 03:24:22.598041 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-03-19 03:24:22.598045 | orchestrator | Thursday 19 March 2026 03:24:13 +0000 (0:00:00.558) 0:00:40.986 ******** 2026-03-19 03:24:22.598049 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:24:22.598054 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:24:22.598057 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:24:22.598061 | orchestrator | 2026-03-19 03:24:22.598065 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-03-19 03:24:22.598069 | orchestrator | Thursday 19 March 2026 03:24:14 +0000 (0:00:00.969) 0:00:41.955 ******** 2026-03-19 03:24:22.598074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598089 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598093 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598109 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598118 | orchestrator | 2026-03-19 03:24:22.598122 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-03-19 03:24:22.598125 | orchestrator | Thursday 19 March 2026 03:24:15 +0000 (0:00:01.695) 0:00:43.650 ******** 2026-03-19 03:24:22.598130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598134 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 
'protocols': ['CEPHFS']}) 2026-03-19 03:24:22.598145 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598149 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-19 03:24:22.598153 | orchestrator | 2026-03-19 03:24:22.598156 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-03-19 03:24:22.598160 | orchestrator | Thursday 19 March 2026 03:24:16 +0000 (0:00:01.161) 0:00:44.812 ******** 2026-03-19 03:24:22.598168 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-03-19 03:24:22.598172 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-03-19 03:24:22.598176 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-03-19 03:24:22.598180 | orchestrator | 2026-03-19 03:24:22.598184 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-03-19 03:24:22.598188 | orchestrator | Thursday 19 March 2026 03:24:17 +0000 (0:00:00.631) 0:00:45.443 ******** 2026-03-19 03:24:22.598192 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:22.598195 | orchestrator | 2026-03-19 03:24:22.598199 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-03-19 03:24:22.598203 | orchestrator | Thursday 19 March 2026 03:24:17 +0000 (0:00:00.121) 0:00:45.564 ******** 2026-03-19 03:24:22.598207 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:22.598210 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:24:22.598214 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:24:22.598218 | orchestrator | 2026-03-19 03:24:22.598222 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-03-19 03:24:22.598225 | orchestrator | Thursday 19 March 2026 03:24:18 +0000 (0:00:00.422) 0:00:45.987 ******** 2026-03-19 03:24:22.598229 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:24:22.598233 | orchestrator | 2026-03-19 03:24:22.598237 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-03-19 03:24:22.598245 | orchestrator | Thursday 19 March 2026 03:24:18 +0000 (0:00:00.534) 0:00:46.522 ******** 2026-03-19 03:24:22.598253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:23.461857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:23.461947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:23.461959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.461968 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.461994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462105 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:23.462128 | orchestrator | 2026-03-19 03:24:23.462137 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-19 03:24:23.462145 | orchestrator | Thursday 19 March 2026 03:24:22 +0000 (0:00:04.121) 0:00:50.643 ******** 2026-03-19 03:24:23.462159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:24.115782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.115884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.115902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.115915 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:24.115928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:24.115967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.115978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.116013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.116024 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:24:24.116033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:24.116043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.116061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.116071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:24.116081 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:24:24.116091 | orchestrator | 2026-03-19 03:24:24.116102 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-19 03:24:24.116114 | orchestrator | Thursday 19 March 2026 03:24:23 +0000 (0:00:00.862) 0:00:51.505 ******** 2026-03-19 03:24:24.116138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:28.804932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805075 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:28.805085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:28.805094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805142 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:24:28.805151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:28.805170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:28.805204 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:24:28.805214 | orchestrator | 2026-03-19 03:24:28.805227 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-19 03:24:28.805240 | orchestrator | Thursday 19 
March 2026 03:24:24 +0000 (0:00:00.887) 0:00:52.393 ******** 2026-03-19 03:24:28.805258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:35.642531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:35.642675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:35.642694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-19 03:24:35.642727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:35.642840 | orchestrator | 2026-03-19 03:24:35.642852 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-19 03:24:35.642863 | orchestrator | Thursday 19 March 2026 03:24:29 +0000 (0:00:04.644) 0:00:57.037 ******** 2026-03-19 03:24:35.642885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:39.958805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:39.958911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:24:39.958929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.958938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.958960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:39.958983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:39.959010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.959017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:39.959025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.959031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.959037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:24:39.959045 | orchestrator | 2026-03-19 03:24:39.959053 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-03-19 03:24:39.959065 | orchestrator | Thursday 19 March 2026 03:24:35 +0000 (0:00:06.644) 0:01:03.681 ******** 
2026-03-19 03:24:39.959072 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-19 03:24:39.959078 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-19 03:24:39.959084 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-19 03:24:39.959097 | orchestrator | 2026-03-19 03:24:39.959104 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-19 03:24:39.959110 | orchestrator | Thursday 19 March 2026 03:24:39 +0000 (0:00:03.657) 0:01:07.339 ******** 2026-03-19 03:24:39.959147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:43.315172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315270 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:24:43.315278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:43.315314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315345 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:24:43.315351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-19 03:24:43.315357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 03:24:43.315383 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:24:43.315389 | orchestrator | 2026-03-19 03:24:43.315472 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-19 03:24:43.315487 | orchestrator | Thursday 19 March 2026 03:24:40 +0000 (0:00:00.668) 0:01:08.008 ******** 2026-03-19 03:24:43.315503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:25:24.377295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:25:24.377432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-19 03:25:24.377458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-19 03:25:24.377874 | orchestrator | 2026-03-19 03:25:24.377895 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-03-19 03:25:24.377918 | orchestrator | Thursday 19 March 2026 03:24:43 +0000 (0:00:03.339) 0:01:11.347 ******** 2026-03-19 03:25:24.377940 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:25:24.377966 | orchestrator | 2026-03-19 03:25:24.377989 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-03-19 03:25:24.378014 | orchestrator | Thursday 19 March 2026 03:24:45 +0000 (0:00:02.368) 0:01:13.715 ******** 2026-03-19 03:25:24.378114 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:25:24.378135 | orchestrator | 2026-03-19 03:25:24.378154 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-03-19 03:25:24.378174 | orchestrator | Thursday 19 March 2026 03:24:48 +0000 (0:00:02.540) 0:01:16.256 ******** 2026-03-19 03:25:24.378192 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:25:24.378209 | orchestrator | 2026-03-19 03:25:24.378224 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-19 03:25:24.378240 | orchestrator | Thursday 19 March 2026 03:25:24 +0000 (0:00:35.805) 0:01:52.062 ******** 2026-03-19 03:25:24.378255 | orchestrator | 2026-03-19 03:25:24.378285 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-19 03:26:06.997635 | orchestrator | Thursday 19 March 2026 03:25:24 
+0000 (0:00:00.086) 0:01:52.149 ******** 2026-03-19 03:26:06.997787 | orchestrator | 2026-03-19 03:26:06.997795 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-19 03:26:06.997800 | orchestrator | Thursday 19 March 2026 03:25:24 +0000 (0:00:00.072) 0:01:52.221 ******** 2026-03-19 03:26:06.997804 | orchestrator | 2026-03-19 03:26:06.997808 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-03-19 03:26:06.997813 | orchestrator | Thursday 19 March 2026 03:25:24 +0000 (0:00:00.088) 0:01:52.309 ******** 2026-03-19 03:26:06.997819 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:26:06.997829 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:26:06.997838 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:26:06.997843 | orchestrator | 2026-03-19 03:26:06.997850 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-03-19 03:26:06.997856 | orchestrator | Thursday 19 March 2026 03:25:34 +0000 (0:00:09.761) 0:02:02.071 ******** 2026-03-19 03:26:06.997862 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:26:06.997868 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:26:06.997875 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:26:06.997904 | orchestrator | 2026-03-19 03:26:06.997911 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-03-19 03:26:06.997917 | orchestrator | Thursday 19 March 2026 03:25:45 +0000 (0:00:11.158) 0:02:13.229 ******** 2026-03-19 03:26:06.997923 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:26:06.997929 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:26:06.997936 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:26:06.997941 | orchestrator | 2026-03-19 03:26:06.997948 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-03-19 
03:26:06.997954 | orchestrator | Thursday 19 March 2026 03:25:55 +0000 (0:00:10.177) 0:02:23.407 ********
2026-03-19 03:26:06.997960 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:26:06.997965 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:26:06.997972 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:26:06.997977 | orchestrator |
2026-03-19 03:26:06.997984 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:26:06.997992 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 03:26:06.998000 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 03:26:06.998006 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 03:26:06.998012 | orchestrator |
2026-03-19 03:26:06.998071 | orchestrator |
2026-03-19 03:26:06.998077 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:26:06.998081 | orchestrator | Thursday 19 March 2026 03:26:06 +0000 (0:00:11.088) 0:02:34.496 ********
2026-03-19 03:26:06.998085 | orchestrator | ===============================================================================
2026-03-19 03:26:06.998089 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 35.81s
2026-03-19 03:26:06.998093 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.54s
2026-03-19 03:26:06.998097 | orchestrator | manila : Restart manila-data container --------------------------------- 11.16s
2026-03-19 03:26:06.998100 | orchestrator | manila : Restart manila-share container -------------------------------- 11.09s
2026-03-19 03:26:06.998114 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.18s
2026-03-19 03:26:06.998118 | orchestrator | manila : Restart manila-api container ----------------------------------- 9.76s
2026-03-19 03:26:06.998121 | orchestrator | service-ks-register : manila | Creating services ------------------------ 7.05s
2026-03-19 03:26:06.998125 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.64s
2026-03-19 03:26:06.998129 | orchestrator | manila : Copying over config.json files for services -------------------- 4.64s
2026-03-19 03:26:06.998133 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.23s
2026-03-19 03:26:06.998136 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.19s
2026-03-19 03:26:06.998140 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.12s
2026-03-19 03:26:06.998144 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.66s
2026-03-19 03:26:06.998148 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.46s
2026-03-19 03:26:06.998151 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.42s
2026-03-19 03:26:06.998155 | orchestrator | manila : Check manila containers ---------------------------------------- 3.34s
2026-03-19 03:26:06.998159 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.54s
2026-03-19 03:26:06.998163 | orchestrator | manila : Creating Manila database --------------------------------------- 2.37s
2026-03-19 03:26:06.998166 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.36s
2026-03-19 03:26:06.998177 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.70s
2026-03-19 03:26:07.295755 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-03-19 03:26:19.426888 | orchestrator | 2026-03-19 03:26:19
| INFO  | Task ac9388a2-fe35-4cc6-a465-d6f05a7c202c (netdata) was prepared for execution. 2026-03-19 03:26:19.427002 | orchestrator | 2026-03-19 03:26:19 | INFO  | It takes a moment until task ac9388a2-fe35-4cc6-a465-d6f05a7c202c (netdata) has been started and output is visible here. 2026-03-19 03:27:55.475782 | orchestrator | 2026-03-19 03:27:55.475859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 03:27:55.475866 | orchestrator | 2026-03-19 03:27:55.475871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:27:55.475876 | orchestrator | Thursday 19 March 2026 03:26:23 +0000 (0:00:00.238) 0:00:00.238 ******** 2026-03-19 03:27:55.475880 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-19 03:27:55.475886 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-19 03:27:55.475890 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-19 03:27:55.475894 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-19 03:27:55.475898 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-19 03:27:55.475902 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-19 03:27:55.475972 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-19 03:27:55.475976 | orchestrator | 2026-03-19 03:27:55.475980 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-19 03:27:55.475984 | orchestrator | 2026-03-19 03:27:55.475988 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-19 03:27:55.475992 | orchestrator | Thursday 19 March 2026 03:26:24 +0000 (0:00:00.817) 0:00:01.056 ******** 2026-03-19 03:27:55.475998 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:27:55.476004 | orchestrator | 2026-03-19 03:27:55.476008 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-19 03:27:55.476012 | orchestrator | Thursday 19 March 2026 03:26:25 +0000 (0:00:00.964) 0:00:02.020 ******** 2026-03-19 03:27:55.476016 | orchestrator | ok: [testbed-manager] 2026-03-19 03:27:55.476023 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:27:55.476029 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:27:55.476035 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:27:55.476042 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:27:55.476049 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:27:55.476055 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:27:55.476061 | orchestrator | 2026-03-19 03:27:55.476067 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-19 03:27:55.476074 | orchestrator | Thursday 19 March 2026 03:26:27 +0000 (0:00:01.616) 0:00:03.637 ******** 2026-03-19 03:27:55.476079 | orchestrator | ok: [testbed-manager] 2026-03-19 03:27:55.476087 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:27:55.476093 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:27:55.476100 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:27:55.476105 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:27:55.476113 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:27:55.476119 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:27:55.476125 | orchestrator | 2026-03-19 03:27:55.476131 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-19 03:27:55.476138 | orchestrator | Thursday 19 March 2026 03:26:29 +0000 (0:00:02.111) 0:00:05.748 ******** 
2026-03-19 03:27:55.476145 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476152 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:27:55.476158 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:27:55.476189 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:27:55.476197 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:27:55.476203 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:27:55.476209 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:27:55.476215 | orchestrator | 2026-03-19 03:27:55.476234 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-19 03:27:55.476241 | orchestrator | Thursday 19 March 2026 03:26:30 +0000 (0:00:01.611) 0:00:07.360 ******** 2026-03-19 03:27:55.476247 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476256 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:27:55.476264 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:27:55.476272 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:27:55.476278 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:27:55.476284 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:27:55.476291 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:27:55.476297 | orchestrator | 2026-03-19 03:27:55.476303 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-19 03:27:55.476309 | orchestrator | Thursday 19 March 2026 03:26:47 +0000 (0:00:16.578) 0:00:23.938 ******** 2026-03-19 03:27:55.476314 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476321 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:27:55.476327 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:27:55.476333 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:27:55.476338 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:27:55.476345 | orchestrator | changed: [testbed-node-2] 2026-03-19 
03:27:55.476351 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:27:55.476356 | orchestrator | 2026-03-19 03:27:55.476362 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-19 03:27:55.476369 | orchestrator | Thursday 19 March 2026 03:27:29 +0000 (0:00:42.437) 0:01:06.376 ******** 2026-03-19 03:27:55.476377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:27:55.476385 | orchestrator | 2026-03-19 03:27:55.476391 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-19 03:27:55.476398 | orchestrator | Thursday 19 March 2026 03:27:31 +0000 (0:00:01.641) 0:01:08.018 ******** 2026-03-19 03:27:55.476404 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-19 03:27:55.476411 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-19 03:27:55.476418 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-19 03:27:55.476425 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-19 03:27:55.476448 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-19 03:27:55.476455 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-19 03:27:55.476464 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-19 03:27:55.476470 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-19 03:27:55.476476 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-19 03:27:55.476482 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-19 03:27:55.476489 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-19 03:27:55.476495 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2026-03-19 03:27:55.476502 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-19 03:27:55.476509 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-19 03:27:55.476516 | orchestrator | 2026-03-19 03:27:55.476522 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-19 03:27:55.476531 | orchestrator | Thursday 19 March 2026 03:27:35 +0000 (0:00:03.784) 0:01:11.803 ******** 2026-03-19 03:27:55.476537 | orchestrator | ok: [testbed-manager] 2026-03-19 03:27:55.476543 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:27:55.476559 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:27:55.476566 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:27:55.476573 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:27:55.476579 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:27:55.476585 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:27:55.476592 | orchestrator | 2026-03-19 03:27:55.476599 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-19 03:27:55.476606 | orchestrator | Thursday 19 March 2026 03:27:36 +0000 (0:00:01.347) 0:01:13.150 ******** 2026-03-19 03:27:55.476612 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476620 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:27:55.476629 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:27:55.476639 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:27:55.476645 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:27:55.476651 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:27:55.476657 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:27:55.476663 | orchestrator | 2026-03-19 03:27:55.476670 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-19 03:27:55.476677 | orchestrator | Thursday 19 March 2026 03:27:37 +0000 
(0:00:01.300) 0:01:14.451 ******** 2026-03-19 03:27:55.476684 | orchestrator | ok: [testbed-manager] 2026-03-19 03:27:55.476690 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:27:55.476698 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:27:55.476706 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:27:55.476714 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:27:55.476721 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:27:55.476727 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:27:55.476734 | orchestrator | 2026-03-19 03:27:55.476741 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-19 03:27:55.476748 | orchestrator | Thursday 19 March 2026 03:27:39 +0000 (0:00:01.235) 0:01:15.686 ******** 2026-03-19 03:27:55.476754 | orchestrator | ok: [testbed-manager] 2026-03-19 03:27:55.476761 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:27:55.476768 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:27:55.476774 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:27:55.476780 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:27:55.476786 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:27:55.476792 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:27:55.476799 | orchestrator | 2026-03-19 03:27:55.476804 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-19 03:27:55.476810 | orchestrator | Thursday 19 March 2026 03:27:40 +0000 (0:00:01.699) 0:01:17.385 ******** 2026-03-19 03:27:55.476824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-19 03:27:55.476832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:27:55.476839 | orchestrator | 2026-03-19 
03:27:55.476845 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-19 03:27:55.476852 | orchestrator | Thursday 19 March 2026 03:27:42 +0000 (0:00:01.376) 0:01:18.761 ******** 2026-03-19 03:27:55.476857 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476864 | orchestrator | 2026-03-19 03:27:55.476870 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-19 03:27:55.476877 | orchestrator | Thursday 19 March 2026 03:27:44 +0000 (0:00:02.122) 0:01:20.884 ******** 2026-03-19 03:27:55.476883 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:27:55.476889 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:27:55.476895 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:27:55.476900 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:27:55.476931 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:27:55.476937 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:27:55.476943 | orchestrator | changed: [testbed-manager] 2026-03-19 03:27:55.476956 | orchestrator | 2026-03-19 03:27:55.476963 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:27:55.476970 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:27:55.476978 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:27:55.476983 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:27:55.476989 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:27:55.477003 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 03:27:55.928267 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 03:27:55.928339 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 03:27:55.928345 | orchestrator |
2026-03-19 03:27:55.928349 | orchestrator |
2026-03-19 03:27:55.928354 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:27:55.928360 | orchestrator | Thursday 19 March 2026 03:27:55 +0000 (0:00:11.117) 0:01:32.001 ********
2026-03-19 03:27:55.928364 | orchestrator | ===============================================================================
2026-03-19 03:27:55.928368 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.44s
2026-03-19 03:27:55.928372 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.58s
2026-03-19 03:27:55.928376 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.12s
2026-03-19 03:27:55.928380 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.78s
2026-03-19 03:27:55.928383 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.12s
2026-03-19 03:27:55.928387 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.11s
2026-03-19 03:27:55.928391 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.70s
2026-03-19 03:27:55.928395 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.64s
2026-03-19 03:27:55.928398 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.62s
2026-03-19 03:27:55.928402 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.61s
2026-03-19 03:27:55.928406 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.38s
2026-03-19 03:27:55.928410 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.35s
2026-03-19 03:27:55.928414 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.30s
2026-03-19 03:27:55.928418 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.24s
2026-03-19 03:27:55.928422 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 0.96s
2026-03-19 03:27:55.928426 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-03-19 03:27:58.253736 | orchestrator | 2026-03-19 03:27:58 | INFO  | Task bd6ac652-d288-4444-871e-13c767897394 (prometheus) was prepared for execution.
2026-03-19 03:27:58.253807 | orchestrator | 2026-03-19 03:27:58 | INFO  | It takes a moment until task bd6ac652-d288-4444-871e-13c767897394 (prometheus) has been started and output is visible here.
2026-03-19 03:28:07.767104 | orchestrator |
2026-03-19 03:28:07.767257 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:28:07.767273 | orchestrator |
2026-03-19 03:28:07.767315 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:28:07.767347 | orchestrator | Thursday 19 March 2026 03:28:02 +0000 (0:00:00.278) 0:00:00.278 ********
2026-03-19 03:28:07.767356 | orchestrator | ok: [testbed-manager]
2026-03-19 03:28:07.767368 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:28:07.767377 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:28:07.767386 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:28:07.767394 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:28:07.767405 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:28:07.767415 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:28:07.767424 | orchestrator |
2026-03-19 03:28:07.767434 | orchestrator |
TASK [Group hosts based on enabled services] *********************************** 2026-03-19 03:28:07.767443 | orchestrator | Thursday 19 March 2026 03:28:03 +0000 (0:00:00.880) 0:00:01.158 ******** 2026-03-19 03:28:07.767453 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767473 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767481 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767490 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767499 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767507 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767517 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-19 03:28:07.767525 | orchestrator | 2026-03-19 03:28:07.767534 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-19 03:28:07.767542 | orchestrator | 2026-03-19 03:28:07.767552 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-19 03:28:07.767562 | orchestrator | Thursday 19 March 2026 03:28:04 +0000 (0:00:00.957) 0:00:02.116 ******** 2026-03-19 03:28:07.767573 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:28:07.767586 | orchestrator | 2026-03-19 03:28:07.767596 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-19 03:28:07.767603 | orchestrator | Thursday 19 March 2026 03:28:05 +0000 (0:00:01.395) 0:00:03.512 ******** 2026-03-19 03:28:07.767615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 03:28:07.767627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767652 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:07.767701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:07.767708 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767715 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:07.767722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:07.767734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:07.767745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:08.698877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:08.699060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699084 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:08.699100 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 03:28:08.699114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:08.699192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699225 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:08.699236 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:08.699288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:08.699312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:13.702272 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:13.702399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:13.702408 | orchestrator | 2026-03-19 03:28:13.702414 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-19 03:28:13.702422 | orchestrator | Thursday 19 March 2026 03:28:08 +0000 (0:00:02.836) 0:00:06.348 ******** 2026-03-19 03:28:13.702427 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 03:28:13.702433 | orchestrator | 2026-03-19 03:28:13.702437 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-19 03:28:13.702441 | orchestrator | Thursday 19 March 2026 03:28:10 +0000 (0:00:01.634) 0:00:07.982 ******** 2026-03-19 03:28:13.702446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 03:28:13.702481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-19 03:28:13.702536 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:13.702546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:13.702550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:13.702554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:13.702561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:13.702574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:16.198497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:16.198504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:16.198512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198550 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 03:28:16.198560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:16.198602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:16.198609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:16.198624 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:17.121259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:17.121353 | orchestrator | 2026-03-19 03:28:17.121361 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-19 03:28:17.121367 | orchestrator | Thursday 19 March 2026 03:28:16 +0000 (0:00:05.868) 0:00:13.850 ******** 2026-03-19 03:28:17.121373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 03:28:17.121379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.121384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.121426 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 03:28:17.121444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.121448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.121457 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:28:17.121462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.121466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.121471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.121475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.121490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.121494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.121502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.831045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.831127 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:28:17.831133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.831138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.831143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.831161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:17.831185 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:28:17.831189 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:28:17.831206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.831210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831218 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:28:17.831222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.831226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:17.831237 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:28:17.831240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:17.831254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:18.585229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:18.585310 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:28:18.585319 | orchestrator | 2026-03-19 03:28:18.585325 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-19 03:28:18.585332 | orchestrator | Thursday 19 March 2026 03:28:17 +0000 (0:00:01.631) 0:00:15.482 ******** 2026-03-19 03:28:18.585338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:18.585344 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:18.585376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-19 03:28:18.585416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:18.585421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585427 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:18.585432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:18.585450 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:18.585455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:18.585466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-19 03:28:19.842084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:19.842191 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:28:19.842207 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:28:19.842222 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:28:19.842238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:19.842280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-03-19 03:28:19.842327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:19.842347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:19.842356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842365 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:28:19.842393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 03:28:19.842411 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:28:19.842421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 03:28:19.842453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842490 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:28:19.842505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-19 03:28:19.842533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 03:28:19.842557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 03:28:23.373366 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:28:23.373455 | orchestrator | 2026-03-19 03:28:23.373466 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-19 03:28:23.373478 | orchestrator | Thursday 19 March 2026 03:28:19 +0000 (0:00:01.998) 0:00:17.481 ******** 2026-03-19 03:28:23.373493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-19 03:28:23.373505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373603 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-19 03:28:23.373624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:23.373643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:23.373651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-19 03:28:23.373662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:23.373669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:23.373676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:23.373689 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 03:28:27.108772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-19 03:28:27.108784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108817 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-19 03:28:27.108824 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 03:28:27.108841 | orchestrator | 2026-03-19 03:28:27.108846 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-19 03:28:27.108851 | orchestrator | Thursday 19 March 2026 03:28:26 +0000 (0:00:06.379) 0:00:23.860 ******** 2026-03-19 03:28:27.108855 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 03:28:27.108863 | orchestrator | 2026-03-19 03:28:27.108867 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-19 03:28:27.108873 | orchestrator | Thursday 19 March 2026 03:28:27 +0000 (0:00:00.905) 0:00:24.765 ******** 2026-03-19 03:28:30.012466 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012563 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012601 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-19 03:28:30.012608 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:30.012614 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012636 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012669 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012675 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3399448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012691 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012698 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:30.012722 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.550776 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.550894 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-19 03:28:31.550936 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.550955 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.550971 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551122 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313166, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.346889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:31.551139 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 
1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551161 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551193 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551218 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551233 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:31.551258 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-19 03:28:33.348098 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348195 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348206 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348213 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348236 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348242 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348248 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348266 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348287 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348294 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348313 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348319 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313123, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.339135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:33.348325 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348331 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:33.348342 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682378 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682494 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682577 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682592 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682607 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682651 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682668 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682709 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682724 
| orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682739 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:34.682785 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313154, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.344156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:35.961620 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961719 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961728 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961735 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961743 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961765 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961805 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961814 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 
03:28:35.961822 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961830 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961837 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961845 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961857 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:35.961884 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150834 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150854 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150867 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150879 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.150948 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313116, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3362877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 
03:28:37.150982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151085 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151100 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151112 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151125 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151156 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:37.151193 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426273 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426377 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426392 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426404 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313140, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3411074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:38.426477 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426490 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426501 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426527 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426537 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426548 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:28:38.426561 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426571 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:28:38.426582 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426604 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426625 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:38.426641 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462091 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462178 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462212 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:28:45.462221 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462228 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:28:45.462247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462254 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313152, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3431442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:45.462261 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462281 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462288 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462307 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
03:28:45.462314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-19 03:28:45.462320 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:28:45.462329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.341505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:45.462334 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313136, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3394399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:45.462338 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313163, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3466008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:28:45.462345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313110, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.335211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208529 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.349245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208662 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313161, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.345945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208680 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.337435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208709 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313113, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3359573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208721 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313149, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773883900.3429873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208733 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.34207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208744 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3479447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-19 03:29:11.208756 | orchestrator | 2026-03-19 03:29:11.208787 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-19 03:29:11.208802 | orchestrator | Thursday 19 March 2026 03:28:51 +0000 (0:00:24.618) 0:00:49.384 ******** 2026-03-19 03:29:11.208814 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 03:29:11.208837 | orchestrator | 2026-03-19 03:29:11.208848 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-19 
03:29:11.208861 | orchestrator | Thursday 19 March 2026 03:28:52 +0000 (0:00:00.790) 0:00:50.174 ******** 2026-03-19 03:29:11.208873 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.208886 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.208898 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.208909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.208920 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.208931 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.208942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.208952 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.208958 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.208965 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.208972 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.208979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.208985 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.208992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209002 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.209013 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.209024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209035 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.209046 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209099 | 
orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.209112 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.209123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209134 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.209145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209156 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.209166 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.209178 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209188 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.209209 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209221 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.209231 | orchestrator | [WARNING]: Skipped 2026-03-19 03:29:11.209242 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209254 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-19 03:29:11.209263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-19 03:29:11.209272 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-19 03:29:11.209281 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 03:29:11.209291 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-19 03:29:11.209302 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:29:11.209313 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-19 03:29:11.209324 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-19 03:29:11.209335 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-19 
03:29:11.209346 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-19 03:29:11.209358 | orchestrator | 2026-03-19 03:29:11.209368 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-19 03:29:11.209384 | orchestrator | Thursday 19 March 2026 03:28:54 +0000 (0:00:01.723) 0:00:51.897 ******** 2026-03-19 03:29:11.209391 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209400 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:29:11.209406 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209413 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:29:11.209420 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209426 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:29:11.209433 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209440 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:29:11.209446 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209453 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:29:11.209460 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-19 03:29:11.209467 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:29:11.209473 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-19 03:29:11.209480 | orchestrator | 2026-03-19 03:29:11.209487 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-19 03:29:11.209503 | orchestrator | Thursday 19 March 2026 03:29:11 +0000 (0:00:16.956) 
0:01:08.854 ********
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-5]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)

TASK [prometheus : Copying over prometheus alertmanager config file] ***********
Thursday 19 March 2026 03:29:14 +0000 (0:00:02.824) 0:01:11.679 ********
skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-3]
changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-5]

TASK [prometheus : Find custom Alertmanager alert notification templates] ******
Thursday 19 March 2026 03:29:15 +0000 (0:00:01.834) 0:01:13.513 ********
ok: [testbed-manager -> localhost]

TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
Thursday 19 March 2026 03:29:16 +0000 (0:00:00.742) 0:01:14.256 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
Thursday 19 March 2026 03:29:17 +0000 (0:00:00.758) 0:01:15.015 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [prometheus : Copying cloud config file for openstack exporter] ***********
Thursday 19 March 2026 03:29:19 +0000 (0:00:02.238) 0:01:17.253 ********
skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-manager]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Copying config file for blackbox exporter] ******************
Thursday 19 March 2026 03:29:21 +0000 (0:00:01.557) 0:01:18.811 ********
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-4]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Find extra prometheus server config files] ******************
Thursday 19 March 2026 03:29:22 +0000 (0:00:01.392) 0:01:20.203 ********
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
ok: [testbed-manager -> localhost]

TASK [prometheus : Create subdirectories for extra config files] ***************
Thursday 19 March 2026 03:29:23 +0000 (0:00:01.123) 0:01:21.327 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Template extra prometheus server config files] **************
Thursday 19 March 2026 03:29:24 +0000 (0:00:00.921) 0:01:22.248 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Check prometheus containers] ********************************
Thursday 19 March 2026 03:29:25 +0000 (0:00:00.904) 0:01:23.153 ********
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [prometheus : Creating prometheus database user and setting permissions] ***
Thursday 19 March 2026 03:29:29 +0000 (0:00:04.302) 0:01:27.455 ********
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-manager]

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:01.278) 0:01:28.733 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.255) 0:01:28.989 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.087) 0:01:29.076 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.074) 0:01:29.150 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.068) 0:01:29.219 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.068) 0:01:29.287 ********

TASK [prometheus : Flush handlers] *********************************************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.069) 0:01:29.357 ********

RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
Thursday 19 March 2026 03:29:31 +0000 (0:00:00.096) 0:01:29.453 ********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
Thursday 19 March 2026 03:29:59 +0000 (0:00:27.600) 0:01:57.053 ********
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]

RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
Thursday 19 March 2026 03:30:11 +0000 (0:00:12.338) 0:02:09.392 ********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
Thursday 19 March 2026 03:30:21 +0000 (0:00:10.017) 0:02:19.409 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
Thursday 19 March 2026 03:30:32 +0000 (0:00:10.587) 0:02:29.997 ********
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-5]

RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
Thursday 19 March 2026 03:30:46 +0000 (0:00:14.391) 0:02:44.389 ********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
Thursday 19 March 2026 03:31:00 +0000 (0:00:14.057) 0:02:58.446 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
Thursday 19 March 2026 03:31:05 +0000 (0:00:05.145) 0:03:03.592 ********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
Thursday 19 March 2026 03:31:11 +0000 (0:00:05.723) 0:03:09.315 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager            : ok=23   changed=14   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
testbed-node-0             : ok=15   changed=10   unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
testbed-node-1             : ok=15   changed=10   unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
testbed-node-2             : ok=15   changed=10   unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
testbed-node-3             : ok=12   changed=7    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
testbed-node-4             : ok=12   changed=7    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
testbed-node-5             : ok=12   changed=7    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 19 March 2026 03:31:21 +0000 (0:00:09.894) 0:03:19.209 ********
===============================================================================
prometheus : Restart prometheus-server container ----------------------- 27.60s
prometheus : Copying over custom prometheus alert rules files ---------- 24.62s
prometheus : Copying over prometheus config file ----------------------- 16.96s
prometheus : Restart prometheus-cadvisor container --------------------- 14.39s
prometheus : Restart prometheus-alertmanager container ----------------- 14.06s
prometheus : Restart prometheus-node-exporter container ---------------- 12.34s
prometheus : Restart prometheus-memcached-exporter container ----------- 10.59s
prometheus : Restart prometheus-mysqld-exporter container -------------- 10.02s
prometheus : Restart prometheus-libvirt-exporter container -------------- 9.89s
prometheus : Copying over config.json files ----------------------------- 6.38s
service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.87s
prometheus : Restart prometheus-blackbox-exporter container ------------- 5.72s
prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.15s
prometheus : Check prometheus containers -------------------------------- 4.30s
prometheus : Ensuring config directories exist -------------------------- 2.84s
prometheus : Copying over prometheus web config file -------------------- 2.82s
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.24s
service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.00s
prometheus : Copying over prometheus alertmanager config file ----------- 1.83s
prometheus : Find prometheus host config overrides ---------------------- 1.72s
2026-03-19 03:31:25 | INFO  | Task 6b7672d0-49b3-4d13-b7be-0ba12de83c9d (grafana) was prepared for execution.
2026-03-19 03:31:25.391702 | orchestrator | 2026-03-19 03:31:25 | INFO  | It takes a moment until task 6b7672d0-49b3-4d13-b7be-0ba12de83c9d (grafana) has been started and output is visible here.
2026-03-19 03:31:35.639582 | orchestrator |
2026-03-19 03:31:35.639715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:31:35.639725 | orchestrator |
2026-03-19 03:31:35.639729 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:31:35.639735 | orchestrator | Thursday 19 March 2026 03:31:29 +0000 (0:00:00.292) 0:00:00.292 ********
2026-03-19 03:31:35.639739 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:31:35.639744 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:31:35.639748 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:31:35.639752 | orchestrator |
2026-03-19 03:31:35.639756 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:31:35.639759 | orchestrator | Thursday 19 March 2026 03:31:30 +0000 (0:00:00.299) 0:00:00.592 ********
2026-03-19 03:31:35.639783 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-19 03:31:35.639788 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-19 03:31:35.639792 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-19 03:31:35.639796 | orchestrator |
2026-03-19 03:31:35.639800 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-19 03:31:35.639803 | orchestrator |
2026-03-19 03:31:35.639807 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-19 03:31:35.639811 | orchestrator | Thursday 19 March 2026 03:31:30 +0000 (0:00:00.455) 0:00:01.048 ******** 2026-03-19
03:31:35.639816 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:31:35.639820 | orchestrator | 2026-03-19 03:31:35.639824 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-19 03:31:35.639828 | orchestrator | Thursday 19 March 2026 03:31:31 +0000 (0:00:00.564) 0:00:01.613 ******** 2026-03-19 03:31:35.639835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639851 | orchestrator | 2026-03-19 03:31:35.639855 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-19 03:31:35.639859 | orchestrator | Thursday 19 March 2026 03:31:32 +0000 (0:00:00.959) 0:00:02.572 ******** 2026-03-19 03:31:35.639863 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-19 03:31:35.639868 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-19 03:31:35.639872 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:31:35.639876 | orchestrator | 2026-03-19 03:31:35.639880 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-19 03:31:35.639883 | orchestrator | Thursday 19 March 2026 03:31:32 +0000 (0:00:00.877) 0:00:03.449 ******** 2026-03-19 03:31:35.639887 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:31:35.639896 | orchestrator | 2026-03-19 03:31:35.639900 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-19 03:31:35.639903 | orchestrator | Thursday 19 March 2026 03:31:33 +0000 (0:00:00.582) 0:00:04.032 ******** 2026-03-19 03:31:35.639926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:35.639938 | orchestrator | 2026-03-19 03:31:35.639942 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-19 03:31:35.639946 | orchestrator | Thursday 19 March 2026 03:31:35 +0000 (0:00:01.527) 0:00:05.560 ******** 2026-03-19 03:31:35.639950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:35.639954 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:31:35.639958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:35.639966 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:31:35.639979 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:42.541426 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:31:42.541576 | orchestrator | 2026-03-19 03:31:42.541597 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-19 03:31:42.541611 | orchestrator | Thursday 19 March 2026 03:31:35 +0000 (0:00:00.585) 0:00:06.145 ******** 2026-03-19 03:31:42.541625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:42.541639 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:31:42.541652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:42.541659 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:31:42.541666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-19 03:31:42.541672 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:31:42.541679 | orchestrator | 2026-03-19 03:31:42.541685 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-19 03:31:42.541691 | orchestrator | Thursday 19 March 2026 03:31:36 +0000 (0:00:00.600) 0:00:06.746 ******** 2026-03-19 03:31:42.541698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541769 | orchestrator | 2026-03-19 03:31:42.541775 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-03-19 03:31:42.541781 | orchestrator | Thursday 19 March 2026 03:31:37 +0000 (0:00:01.374) 0:00:08.120 ******** 2026-03-19 03:31:42.541788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-19 03:31:42.541814 | orchestrator | 2026-03-19 03:31:42.541820 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-19 03:31:42.541826 | orchestrator | Thursday 19 March 2026 03:31:39 +0000 (0:00:01.659) 0:00:09.779 ******** 2026-03-19 03:31:42.541832 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:31:42.541839 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:31:42.541845 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:31:42.541851 | orchestrator | 2026-03-19 03:31:42.541857 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-19 03:31:42.541863 | orchestrator | Thursday 19 March 2026 03:31:39 +0000 (0:00:00.320) 0:00:10.100 ******** 2026-03-19 03:31:42.541869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-19 03:31:42.541877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-19 03:31:42.541883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-19 03:31:42.541890 | orchestrator | 2026-03-19 03:31:42.541897 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-19 03:31:42.541904 | orchestrator | Thursday 19 March 2026 03:31:40 +0000 (0:00:01.243) 0:00:11.343 ******** 2026-03-19 03:31:42.541912 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-19 03:31:42.541924 | 
orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-19 03:31:42.541931 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-19 03:31:42.541938 | orchestrator | 2026-03-19 03:31:42.541945 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-19 03:31:42.541958 | orchestrator | Thursday 19 March 2026 03:31:42 +0000 (0:00:01.693) 0:00:13.037 ******** 2026-03-19 03:31:49.142827 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:31:49.142918 | orchestrator | 2026-03-19 03:31:49.142928 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-19 03:31:49.142937 | orchestrator | Thursday 19 March 2026 03:31:43 +0000 (0:00:00.774) 0:00:13.811 ******** 2026-03-19 03:31:49.142944 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-19 03:31:49.142952 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-19 03:31:49.142959 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:31:49.142966 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:31:49.142973 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:31:49.142979 | orchestrator | 2026-03-19 03:31:49.142987 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-19 03:31:49.142994 | orchestrator | Thursday 19 March 2026 03:31:44 +0000 (0:00:00.734) 0:00:14.545 ******** 2026-03-19 03:31:49.143000 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:31:49.143006 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:31:49.143013 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:31:49.143019 | orchestrator | 2026-03-19 03:31:49.143025 | orchestrator | TASK [grafana : Copying over custom dashboards] 
******************************** 2026-03-19 03:31:49.143032 | orchestrator | Thursday 19 March 2026 03:31:44 +0000 (0:00:00.332) 0:00:14.878 ******** 2026-03-19 03:31:49.143042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312901, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312901, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1312901, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312972, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312972, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1312972, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312921, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2938948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312921, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2938948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:31:49.143156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1312921, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1773883900.2938948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:49.143163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312974, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.303944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:49.143173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312974, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.303944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:49.143185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1312974, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.303944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1312941, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2977402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1312941, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2977402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1312941, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2977402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312962, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312962, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.390978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1312962, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312898, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2894094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312898, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2894094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1312898, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2894094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312910, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312910, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1312910, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2914252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:53.391072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312925, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2939441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312925, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2939441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1312925, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2939441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1312952, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2989314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1312952, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2989314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1312952, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2989314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1312969, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1312969, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1312969, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3019443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312913, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2931683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312913, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2931683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1312913, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2931683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1312960, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3001301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:31:57.488582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1312960, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3001301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1312960, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3001301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1312946, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2987878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1312946, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2987878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1312933, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.297167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1312946, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.2987878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1312933, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.297167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1312929, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1312933, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.297167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1312929, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1312954, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.299936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1312929, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1312954, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.299936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:01.569578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1312926, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1312954, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.299936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1312926, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1312967, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1312926, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.294944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1312967, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313095, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.332964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1312967, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3009763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313095, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.332964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312998, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.315239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312998, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.315239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313095, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.332964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1312988, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3086982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:05.853611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1312988, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3086982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:09.910758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1312998, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.315239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:09.910879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313017, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:09.910897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313017, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:09.910912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
30898, 'inode': 1312988, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3086982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.910969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312978, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3063092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.910980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312978, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3063092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313017, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313057, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3255243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313057, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3255243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1313018, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3232524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1312978, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3063092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1313018, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3232524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:09.911067 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313064, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3259442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.085964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313057, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3255243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313064, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3259442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313091, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3319037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1313018, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3232524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313091, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3319037, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313051, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3246167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313064, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3259442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313051, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773883900.3246167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313011, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3168192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313091, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3319037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313011, 
'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3168192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312995, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3119178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:14.086265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313051, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3246167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.415957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 82960, 'inode': 1312995, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3119178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313007, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3160093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313011, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3168192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313007, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3160093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312992, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3099444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312992, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3099444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1312995, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3119178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313014, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313014, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416117 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313007, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3160093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313080, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3309445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313080, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3309445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:18.416134 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1312992, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3099444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:22.496737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3289292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:22.496849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3289292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:22.496880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313014, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3169444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:22.496890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312982, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-19 03:32:22.496899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312982, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069346, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313080, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3309445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312986, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312986, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3289292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313046, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3238926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313046, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3238926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.496987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1312982, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:32:22.497003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313070, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3275058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:33:58.771902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313070, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3275058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:33:58.772023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1312986, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3069441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:33:58.772038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313046, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3238926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:33:58.772049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313070, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773883900.3275058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-19 03:33:58.772058 | orchestrator |
2026-03-19 03:33:58.772069 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-19 03:33:58.772079 | orchestrator | Thursday 19 March 2026 03:32:25 +0000 (0:00:40.872) 0:00:55.751 ********
2026-03-19 03:33:58.772089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-19 03:33:58.772132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-19 03:33:58.772143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-19 03:33:58.772152 | orchestrator |
2026-03-19 03:33:58.772161 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-19 03:33:58.772170 | orchestrator | Thursday 19 March 2026 03:32:26 +0000 (0:00:01.025) 0:00:56.776 ********
2026-03-19 03:33:58.772179 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:33:58.772189 | orchestrator |
2026-03-19 03:33:58.772197 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-19 03:33:58.772210 | orchestrator | Thursday 19 March 2026 03:32:28 +0000 (0:00:02.665) 0:00:59.442 ********
2026-03-19 03:33:58.772219 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:33:58.772228 | orchestrator |
2026-03-19 03:33:58.772253 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-19 03:33:58.772272 | orchestrator | Thursday 19 March 2026 03:32:31 +0000 (0:00:02.199) 0:01:01.641 ********
2026-03-19 03:33:58.772281 | orchestrator |
2026-03-19 03:33:58.772289 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-19 03:33:58.772298 | orchestrator | Thursday 19 March 2026 03:32:31 +0000 (0:00:00.080) 0:01:01.722 ********
2026-03-19 03:33:58.772306 | orchestrator |
2026-03-19 03:33:58.772315 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-19 03:33:58.772323 | orchestrator | Thursday 19 March 2026 03:32:31 +0000 (0:00:00.071) 0:01:01.793 ********
2026-03-19 03:33:58.772332 | orchestrator |
2026-03-19 03:33:58.772340 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-19 03:33:58.772349 | orchestrator | Thursday 19 March 2026 03:32:31 +0000 (0:00:00.073) 0:01:01.867 ********
2026-03-19 03:33:58.772358 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:33:58.772367 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:33:58.772375 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:33:58.772384 | orchestrator |
2026-03-19 03:33:58.772393 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-19 03:33:58.772426 | orchestrator | Thursday 19 March 2026 03:32:33 +0000 (0:00:02.177) 0:01:04.044 ********
2026-03-19 03:33:58.772444 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:33:58.772455 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:33:58.772465 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-19 03:33:58.772477 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-19 03:33:58.772494 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-19 03:33:58.772504 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-19 03:33:58.772514 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:33:58.772524 | orchestrator |
2026-03-19 03:33:58.772534 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-19 03:33:58.772544 | orchestrator | Thursday 19 March 2026 03:33:25 +0000 (0:00:52.125) 0:01:56.169 ********
2026-03-19 03:33:58.772554 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:33:58.772564 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:33:58.772574 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:33:58.772583 | orchestrator |
2026-03-19 03:33:58.772593 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-19 03:33:58.772603 | orchestrator | Thursday 19 March 2026 03:33:52 +0000 (0:00:27.270) 0:02:23.440 ********
2026-03-19 03:33:58.772613 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:33:58.772623 | orchestrator |
2026-03-19 03:33:58.772632 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-19 03:33:58.772642 | orchestrator | Thursday 19 March 2026 03:33:55 +0000 (0:00:02.616) 0:02:26.056 ********
2026-03-19 03:33:58.772651 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:33:58.772661 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:33:58.772670 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:33:58.772680 | orchestrator |
2026-03-19 03:33:58.772690 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-19 03:33:58.772700 | orchestrator | Thursday 19 March 2026 03:33:55 +0000 (0:00:00.307) 0:02:26.364 ********
2026-03-19 03:33:58.772712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-19 03:33:58.772731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-19 03:33:59.452909 | orchestrator |
2026-03-19 03:33:59.453017 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-19 03:33:59.453036 | orchestrator | Thursday 19 March 2026 03:33:58 +0000 (0:00:02.904) 0:02:29.268 ********
2026-03-19 03:33:59.453048 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:33:59.453060 | orchestrator |
2026-03-19 03:33:59.453069 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:33:59.453076 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 03:33:59.453085 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 03:33:59.453092 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 03:33:59.453098 | orchestrator |
2026-03-19 03:33:59.453104 | orchestrator |
2026-03-19 03:33:59.453111 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:33:59.453118 | orchestrator | Thursday 19 March 2026 03:33:59 +0000 (0:00:00.271) 0:02:29.540 ********
2026-03-19 03:33:59.453139 | orchestrator | ===============================================================================
2026-03-19 03:33:59.453146 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 52.13s
2026-03-19 03:33:59.453173 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 40.87s
2026-03-19 03:33:59.453179 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.27s
2026-03-19 03:33:59.453186 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.90s
2026-03-19 03:33:59.453192 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.67s
2026-03-19 03:33:59.453198 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.62s
2026-03-19 03:33:59.453204 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.20s
2026-03-19 03:33:59.453210 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.18s
2026-03-19 03:33:59.453217 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.69s
2026-03-19 03:33:59.453223 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.66s
2026-03-19 03:33:59.453229 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.53s
2026-03-19 03:33:59.453235 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2026-03-19 03:33:59.453241 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s
2026-03-19 03:33:59.453247 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s
2026-03-19 03:33:59.453253 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.96s
2026-03-19 03:33:59.453259 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s
2026-03-19 03:33:59.453265 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s
2026-03-19 03:33:59.453271 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s
2026-03-19 03:33:59.453278 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.60s
2026-03-19 03:33:59.453284 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.59s
2026-03-19 03:33:59.824876 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-03-19 03:33:59.834755 | orchestrator | + set -e
2026-03-19 03:33:59.834892 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-19 03:33:59.834904 | orchestrator | ++ export INTERACTIVE=false
2026-03-19 03:33:59.834913 | orchestrator | ++ INTERACTIVE=false
2026-03-19 03:33:59.834928 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-19 03:33:59.834936 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-19 03:33:59.834943 | orchestrator | + source /opt/manager-vars.sh
2026-03-19 03:33:59.834949 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-19 03:33:59.834956 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-19 03:33:59.834963 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-19 03:33:59.834969 | orchestrator | ++ CEPH_VERSION=reef
2026-03-19 03:33:59.834976 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-19 03:33:59.834983 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-19 03:33:59.834990 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-19 03:33:59.834998 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-19 03:33:59.835005 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-19 03:33:59.835011 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-19 03:33:59.835018 | orchestrator | ++ export ARA=false
2026-03-19 03:33:59.835025 | orchestrator | ++ ARA=false
2026-03-19 03:33:59.835032 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-19 03:33:59.835038 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-19 03:33:59.835045 | orchestrator | ++ export TEMPEST=false
2026-03-19 03:33:59.835052 | orchestrator | ++ TEMPEST=false
2026-03-19 03:33:59.835058 | orchestrator | ++ export IS_ZUUL=true
2026-03-19 03:33:59.835065 | orchestrator | ++ IS_ZUUL=true
2026-03-19 03:33:59.835072 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 03:33:59.835078 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 03:33:59.835085 | orchestrator | ++ export EXTERNAL_API=false
2026-03-19 03:33:59.835092 | orchestrator | ++ EXTERNAL_API=false
2026-03-19 03:33:59.835098 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-19 03:33:59.835105 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-19 03:33:59.835111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-19 03:33:59.835118 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-19 03:33:59.835154 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-19 03:33:59.835162 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-19 03:33:59.836108 | orchestrator | ++ semver 9.5.0 8.0.0
2026-03-19 03:33:59.907957 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-19 03:33:59.908045 | orchestrator | + osism apply clusterapi
2026-03-19 03:34:01.947164 | orchestrator | 2026-03-19 03:34:01 | INFO  | Task 689c1abe-38b0-4e7f-9f0c-3b40207ca1c2 (clusterapi) was prepared for execution.
2026-03-19 03:34:01.947249 | orchestrator | 2026-03-19 03:34:01 | INFO  | It takes a moment until task 689c1abe-38b0-4e7f-9f0c-3b40207ca1c2 (clusterapi) has been started and output is visible here.
2026-03-19 03:34:58.719196 | orchestrator |
2026-03-19 03:34:58.719265 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-03-19 03:34:58.719276 | orchestrator |
2026-03-19 03:34:58.719283 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-03-19 03:34:58.719290 | orchestrator | Thursday 19 March 2026 03:34:06 +0000 (0:00:00.215) 0:00:00.215 ********
2026-03-19 03:34:58.719297 | orchestrator | included: cert_manager for testbed-manager
2026-03-19 03:34:58.719304 | orchestrator |
2026-03-19 03:34:58.719310 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-03-19 03:34:58.719317 | orchestrator | Thursday 19 March 2026 03:34:06 +0000 (0:00:00.252) 0:00:00.468 ********
2026-03-19 03:34:58.719324 | orchestrator | changed: [testbed-manager]
2026-03-19 03:34:58.719331 | orchestrator |
2026-03-19 03:34:58.719337 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-03-19 03:34:58.719344 | orchestrator | Thursday 19 March 2026 03:34:12 +0000 (0:00:05.448) 0:00:05.916 ********
2026-03-19 03:34:58.719350 | orchestrator | changed: [testbed-manager]
2026-03-19 03:34:58.719356 | orchestrator |
2026-03-19 03:34:58.719363 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-03-19 03:34:58.719369 | orchestrator |
2026-03-19 03:34:58.719375 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-03-19 03:34:58.719382 | orchestrator | Thursday 19 March 2026 03:34:37 +0000 (0:00:25.504) 0:00:31.420 ********
2026-03-19 03:34:58.719400 | orchestrator | ok: [testbed-manager]
2026-03-19 03:34:58.719407 | orchestrator |
2026-03-19 03:34:58.719426 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-03-19 03:34:58.719430 | orchestrator | Thursday 19 March 2026 03:34:38 +0000 (0:00:01.134) 0:00:32.555 ********
2026-03-19 03:34:58.719434 | orchestrator | ok: [testbed-manager]
2026-03-19 03:34:58.719438 | orchestrator |
2026-03-19 03:34:58.719441 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-03-19 03:34:58.719446 | orchestrator | Thursday 19 March 2026 03:34:39 +0000 (0:00:00.174) 0:00:32.729 ********
2026-03-19 03:34:58.719492 | orchestrator | ok: [testbed-manager]
2026-03-19 03:34:58.719496 | orchestrator |
2026-03-19 03:34:58.719500 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-03-19 03:34:58.719504 | orchestrator | Thursday 19 March 2026 03:34:56 +0000 (0:00:17.078) 0:00:49.807 ********
2026-03-19 03:34:58.719508 | orchestrator | skipping: [testbed-manager]
2026-03-19 03:34:58.719512 | orchestrator |
2026-03-19 03:34:58.719515 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-03-19 03:34:58.719519 | orchestrator | Thursday 19 March 2026 03:34:56 +0000 (0:00:00.136) 0:00:49.944 ********
2026-03-19 03:34:58.719523 | orchestrator | changed: [testbed-manager]
2026-03-19 03:34:58.719527 | orchestrator |
2026-03-19 03:34:58.719530 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:34:58.719535 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 03:34:58.719539 | orchestrator |
2026-03-19 03:34:58.719543 | orchestrator |
2026-03-19 03:34:58.719547 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:34:58.719550 | orchestrator | Thursday 19 March 2026 03:34:58 +0000 (0:00:02.098) 0:00:52.043 ********
2026-03-19 03:34:58.719554 | orchestrator | ===============================================================================
2026-03-19 03:34:58.719569 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 25.50s
2026-03-19 03:34:58.719573 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.08s
2026-03-19 03:34:58.719577 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.45s
2026-03-19 03:34:58.719580 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.10s
2026-03-19 03:34:58.719584 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.13s
2026-03-19 03:34:58.719588 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s
2026-03-19 03:34:58.719591 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s
2026-03-19 03:34:58.719595 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s
2026-03-19 03:34:59.055110 | orchestrator | + osism apply magnum
2026-03-19 03:35:01.162575 | orchestrator | 2026-03-19 03:35:01 | INFO  | Task c4caf4ce-c61a-444c-8e08-91e72176e37c (magnum) was prepared for execution.
2026-03-19 03:35:01.162646 | orchestrator | 2026-03-19 03:35:01 | INFO  | It takes a moment until task c4caf4ce-c61a-444c-8e08-91e72176e37c (magnum) has been started and output is visible here.
2026-03-19 03:35:47.907080 | orchestrator |
2026-03-19 03:35:47.907176 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:35:47.907187 | orchestrator |
2026-03-19 03:35:47.907196 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:35:47.907203 | orchestrator | Thursday 19 March 2026 03:35:05 +0000 (0:00:00.308) 0:00:00.308 ********
2026-03-19 03:35:47.907210 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:35:47.907218 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:35:47.907224 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:35:47.907230 | orchestrator |
2026-03-19 03:35:47.907237 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:35:47.907243 | orchestrator | Thursday 19 March 2026 03:35:05 +0000 (0:00:00.353) 0:00:00.662 ********
2026-03-19 03:35:47.907250 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-19 03:35:47.907256 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-19 03:35:47.907263 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-19 03:35:47.907269 | orchestrator |
2026-03-19 03:35:47.907275 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-19 03:35:47.907281 | orchestrator |
2026-03-19 03:35:47.907288 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-19 03:35:47.907294 | orchestrator | Thursday 19 March 2026 03:35:06 +0000 (0:00:00.483) 0:00:01.146 ********
2026-03-19 03:35:47.907300 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:35:47.907307 | orchestrator |
2026-03-19 03:35:47.907313 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-19 03:35:47.907319 | orchestrator | Thursday 19 March 2026 03:35:06 +0000 (0:00:00.556) 0:00:01.703 ********
2026-03-19 03:35:47.907326 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-19 03:35:47.907333 | orchestrator |
2026-03-19 03:35:47.907339 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-19 03:35:47.907345 | orchestrator | Thursday 19 March 2026 03:35:10 +0000 (0:00:04.017) 0:00:05.720 ********
2026-03-19 03:35:47.907352 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-19 03:35:47.907358 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-19 03:35:47.907365 | orchestrator |
2026-03-19 03:35:47.907371 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-19 03:35:47.907377 | orchestrator | Thursday 19 March 2026 03:35:18 +0000 (0:00:07.364) 0:00:13.085 ********
2026-03-19 03:35:47.907405 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-19 03:35:47.907411 | orchestrator |
2026-03-19 03:35:47.907429 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-19 03:35:47.907436 | orchestrator | Thursday 19 March 2026 03:35:22 +0000 (0:00:03.970) 0:00:17.055 ********
2026-03-19 03:35:47.907442 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-19 03:35:47.907449 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-19 03:35:47.907455 | orchestrator |
2026-03-19 03:35:47.907502 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-19 03:35:47.907510 | orchestrator | Thursday 19 March 2026 03:35:26 +0000 (0:00:04.264) 0:00:21.319 ********
2026-03-19 03:35:47.907517 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-19 03:35:47.907523 | orchestrator |
2026-03-19 03:35:47.907529 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-19 03:35:47.907535 | orchestrator | Thursday 19 March 2026 03:35:30 +0000 (0:00:03.551) 0:00:24.871 ********
2026-03-19 03:35:47.907542 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-19 03:35:47.907548 | orchestrator |
2026-03-19 03:35:47.907554 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-19 03:35:47.907561 | orchestrator | Thursday 19 March 2026 03:35:34 +0000 (0:00:03.840) 0:00:28.971 ********
2026-03-19 03:35:47.907568 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:35:47.907575 | orchestrator |
2026-03-19 03:35:47.907582 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-19 03:35:47.907589 | orchestrator | Thursday 19 March 2026 03:35:38 +0000 (0:00:04.215) 0:00:32.812 ********
2026-03-19 03:35:47.907596 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:35:47.907603 | orchestrator |
2026-03-19 03:35:47.907610 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-19 03:35:47.907618 | orchestrator | Thursday 19 March 2026 03:35:42 +0000 (0:00:04.215) 0:00:37.027 ********
2026-03-19 03:35:47.907625 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:35:47.907632 | orchestrator |
2026-03-19 03:35:47.907639 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-19 03:35:47.907646 | orchestrator | Thursday 19 March 2026 03:35:46 +0000 (0:00:03.968) 0:00:40.995 ********
2026-03-19 03:35:47.907671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:35:47.907683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:35:47.907700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:35:47.907708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:35:47.907715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:35:47.907727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:35:55.103035 | orchestrator |
2026-03-19 03:35:55.103199 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-19 03:35:55.103238 | orchestrator | Thursday 19 March 2026 03:35:47 +0000 (0:00:01.636) 0:00:42.632 ********
2026-03-19 03:35:55.103246 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:35:55.103255 | orchestrator |
2026-03-19 03:35:55.103261 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-19 03:35:55.103267 | orchestrator | Thursday 19 March 2026 03:35:48 +0000 (0:00:00.140) 0:00:42.773 ********
2026-03-19 03:35:55.103274 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:35:55.103303 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:35:55.103310 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:35:55.103316 | orchestrator |
2026-03-19 03:35:55.103322 | orchestrator |
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-19 03:35:55.103327 | orchestrator | Thursday 19 March 2026 03:35:48 +0000 (0:00:00.305) 0:00:43.078 ******** 2026-03-19 03:35:55.103333 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-19 03:35:55.103340 | orchestrator | 2026-03-19 03:35:55.103346 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-19 03:35:55.103352 | orchestrator | Thursday 19 March 2026 03:35:49 +0000 (0:00:00.858) 0:00:43.936 ******** 2026-03-19 03:35:55.103377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:55.103390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:55.103397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:55.103432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:55.103456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:55.103522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:55.103532 | orchestrator | 2026-03-19 03:35:55.103541 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-19 03:35:55.103547 
| orchestrator | Thursday 19 March 2026 03:35:51 +0000 (0:00:02.427) 0:00:46.364 ******** 2026-03-19 03:35:55.103552 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:35:55.103559 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:35:55.103564 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:35:55.103569 | orchestrator | 2026-03-19 03:35:55.103575 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-19 03:35:55.103581 | orchestrator | Thursday 19 March 2026 03:35:52 +0000 (0:00:00.424) 0:00:46.789 ******** 2026-03-19 03:35:55.103587 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:35:55.103593 | orchestrator | 2026-03-19 03:35:55.103599 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-19 03:35:55.103605 | orchestrator | Thursday 19 March 2026 03:35:52 +0000 (0:00:00.546) 0:00:47.336 ******** 2026-03-19 03:35:55.103612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:55.103628 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:56.036218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:56.037099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:56.037141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:56.037151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:35:56.037159 | orchestrator | 2026-03-19 03:35:56.037168 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-19 03:35:56.037198 | orchestrator | Thursday 19 March 2026 03:35:55 +0000 (0:00:02.498) 0:00:49.834 ******** 2026-03-19 03:35:56.037226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:56.037235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:56.037243 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:35:56.037256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:56.037262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:56.037268 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:35:56.037274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:56.037293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:59.676262 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:35:59.676337 | orchestrator | 2026-03-19 
03:35:59.676344 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-19 03:35:59.676350 | orchestrator | Thursday 19 March 2026 03:35:56 +0000 (0:00:00.930) 0:00:50.765 ******** 2026-03-19 03:35:59.676355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:59.676375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:59.676380 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 03:35:59.676384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:59.676405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:59.676409 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:35:59.676425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-19 03:35:59.676429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 03:35:59.676433 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:35:59.676437 | orchestrator | 2026-03-19 03:35:59.676443 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-19 03:35:59.676447 | orchestrator | Thursday 19 March 2026 03:35:56 +0000 (0:00:00.906) 0:00:51.672 ******** 2026-03-19 03:35:59.676452 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:59.676461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:35:59.676531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-19 03:36:05.880868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-19 03:36:05.880996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:05.881020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:05.881068 | orchestrator |
2026-03-19 03:36:05.881088 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-19 03:36:05.881108 | orchestrator | Thursday 19 March 2026 03:35:59 +0000 (0:00:02.732) 0:00:54.404 ********
2026-03-19 03:36:05.881127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:05.881168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:05.881188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:05.881213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:05.881229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:05.881259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:05.881276 | orchestrator |
2026-03-19 03:36:05.881292 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-03-19 03:36:05.881308 | orchestrator | Thursday 19 March 2026 03:36:05 +0000 (0:00:05.461) 0:00:59.866 ********
2026-03-19 03:36:05.881336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:07.796406 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:36:07.796435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:07.796561 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:36:07.796572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:07.796604 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:36:07.796612 | orchestrator |
2026-03-19 03:36:07.796621 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-03-19 03:36:07.796630 | orchestrator | Thursday 19 March 2026 03:36:05 +0000 (0:00:00.750) 0:01:00.617 ********
2026-03-19 03:36:07.796644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-19 03:36:07.796677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:07.796692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-19 03:36:55.734100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})
2026-03-19 03:36:55.734267 | orchestrator |
2026-03-19 03:36:55.734280 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-19 03:36:55.734288 | orchestrator | Thursday 19 March 2026 03:36:07 +0000 (0:00:01.910) 0:01:02.528 ********
2026-03-19 03:36:55.734295 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:36:55.734302 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:36:55.734308 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:36:55.734314 | orchestrator |
2026-03-19 03:36:55.734321 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-19 03:36:55.734327 | orchestrator | Thursday 19 March 2026 03:36:08 +0000 (0:00:00.372) 0:01:02.900 ********
2026-03-19 03:36:55.734333 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:36:55.734339 | orchestrator |
2026-03-19 03:36:55.734345 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-19 03:36:55.734352 | orchestrator | Thursday 19 March 2026 03:36:10 +0000 (0:00:02.490) 0:01:05.391 ********
2026-03-19 03:36:55.734358 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:36:55.734364 | orchestrator |
2026-03-19 03:36:55.734370 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-19 03:36:55.734376 | orchestrator | Thursday 19 March 2026 03:36:13 +0000 (0:00:02.655) 0:01:08.047 ********
2026-03-19 03:36:55.734382 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:36:55.734388 | orchestrator |
2026-03-19 03:36:55.734394 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-19 03:36:55.734400 | orchestrator | Thursday 19 March 2026 03:36:31 +0000 (0:00:17.786) 0:01:25.833 ********
2026-03-19 03:36:55.734407 | orchestrator |
2026-03-19 03:36:55.734413 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-19 03:36:55.734419 | orchestrator | Thursday 19 March 2026 03:36:31 +0000 (0:00:00.073) 0:01:25.906 ********
2026-03-19 03:36:55.734424 | orchestrator |
2026-03-19 03:36:55.734430 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-19 03:36:55.734437 | orchestrator | Thursday 19 March 2026 03:36:31 +0000 (0:00:00.075) 0:01:25.982 ********
2026-03-19 03:36:55.734443 | orchestrator |
2026-03-19 03:36:55.734450 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-19 03:36:55.734456 | orchestrator | Thursday 19 March 2026 03:36:31 +0000 (0:00:00.072) 0:01:26.055 ********
2026-03-19 03:36:55.734463 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:36:55.734469 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:36:55.734476 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:36:55.734482 | orchestrator |
2026-03-19 03:36:55.734488 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-19 03:36:55.734494 | orchestrator | Thursday 19 March 2026 03:36:44 +0000 (0:00:13.427) 0:01:39.482 ********
2026-03-19 03:36:55.734526 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:36:55.734535 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:36:55.734541 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:36:55.734548 | orchestrator |
2026-03-19 03:36:55.734554 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:36:55.734562 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 03:36:55.734570 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 03:36:55.734577 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-19 03:36:55.734583 | orchestrator |
2026-03-19 03:36:55.734598 | orchestrator |
2026-03-19 03:36:55.734604 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:36:55.734611 | orchestrator | Thursday 19 March 2026 03:36:55 +0000 (0:00:10.577) 0:01:50.059 ********
2026-03-19 03:36:55.734618 | orchestrator | ===============================================================================
2026-03-19 03:36:55.734624 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.79s
2026-03-19 03:36:55.734631 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.43s
2026-03-19 03:36:55.734638 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.58s
2026-03-19 03:36:55.734644 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.36s
2026-03-19 03:36:55.734651 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.46s
2026-03-19 03:36:55.734658 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.26s
2026-03-19 03:36:55.734665 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.22s
2026-03-19 03:36:55.734689 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.10s
2026-03-19 03:36:55.734697 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.02s
2026-03-19 03:36:55.734704 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.97s
2026-03-19 03:36:55.734710 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.97s
2026-03-19 03:36:55.734717 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.84s
2026-03-19 03:36:55.734723 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.55s
2026-03-19 03:36:55.734730 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.73s
2026-03-19 03:36:55.734745 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.66s
2026-03-19 03:36:55.734752 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.50s
2026-03-19 03:36:55.734758 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.49s
2026-03-19 03:36:55.734765 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.43s
2026-03-19 03:36:55.734772 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.91s
2026-03-19 03:36:55.734779 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.64s
2026-03-19 03:36:56.402692 | orchestrator | ok: Runtime: 1:41:37.401349
2026-03-19 03:36:56.661439 |
2026-03-19 03:36:56.661607 | TASK [Deploy in a nutshell]
2026-03-19 03:36:57.198011 | orchestrator | skipping: Conditional result was False
2026-03-19 03:36:57.222396 |
2026-03-19 03:36:57.222560 | TASK [Bootstrap services]
2026-03-19 03:36:57.950748 | orchestrator |
2026-03-19 03:36:57.950886 | orchestrator | # BOOTSTRAP
2026-03-19 03:36:57.950896 | orchestrator |
2026-03-19 03:36:57.950902 | orchestrator | + set -e
2026-03-19 03:36:57.950907 | orchestrator | + echo
2026-03-19 03:36:57.950912 | orchestrator | + echo '# BOOTSTRAP'
2026-03-19 03:36:57.950919 | orchestrator | + echo
2026-03-19 03:36:57.950940 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-19 03:36:57.962295 | orchestrator | + set -e
2026-03-19 03:36:57.962379 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-19 03:37:00.238536 | orchestrator | 2026-03-19 03:37:00 | INFO  | It takes a 
moment until task cd3070c3-c21d-4253-b425-e735f0ab6ee6 (flavor-manager) has been started and output is visible here.
2026-03-19 03:37:08.533227 | orchestrator | 2026-03-19 03:37:03 | INFO  | Flavor SCS-1L-1 created
2026-03-19 03:37:08.533466 | orchestrator | 2026-03-19 03:37:04 | INFO  | Flavor SCS-1L-1-5 created
2026-03-19 03:37:08.534242 | orchestrator | 2026-03-19 03:37:04 | INFO  | Flavor SCS-1V-2 created
2026-03-19 03:37:08.534284 | orchestrator | 2026-03-19 03:37:04 | INFO  | Flavor SCS-1V-2-5 created
2026-03-19 03:37:08.534296 | orchestrator | 2026-03-19 03:37:04 | INFO  | Flavor SCS-1V-4 created
2026-03-19 03:37:08.534309 | orchestrator | 2026-03-19 03:37:04 | INFO  | Flavor SCS-1V-4-10 created
2026-03-19 03:37:08.534339 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-1V-8 created
2026-03-19 03:37:08.534353 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-1V-8-20 created
2026-03-19 03:37:08.534379 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-2V-4 created
2026-03-19 03:37:08.534389 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-2V-4-10 created
2026-03-19 03:37:08.534398 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-2V-8 created
2026-03-19 03:37:08.534407 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-2V-8-20 created
2026-03-19 03:37:08.534415 | orchestrator | 2026-03-19 03:37:05 | INFO  | Flavor SCS-2V-16 created
2026-03-19 03:37:08.534423 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-2V-16-50 created
2026-03-19 03:37:08.534432 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-4V-8 created
2026-03-19 03:37:08.534440 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-4V-8-20 created
2026-03-19 03:37:08.534449 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-4V-16 created
2026-03-19 03:37:08.534459 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-4V-16-50 created
2026-03-19 03:37:08.534468 | orchestrator | 2026-03-19 03:37:06 | INFO  | Flavor SCS-4V-32 created
2026-03-19 03:37:08.534477 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-4V-32-100 created
2026-03-19 03:37:08.534486 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-8V-16 created
2026-03-19 03:37:08.534496 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-8V-16-50 created
2026-03-19 03:37:08.534506 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-8V-32 created
2026-03-19 03:37:08.534552 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-8V-32-100 created
2026-03-19 03:37:08.534558 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-16V-32 created
2026-03-19 03:37:08.534564 | orchestrator | 2026-03-19 03:37:07 | INFO  | Flavor SCS-16V-32-100 created
2026-03-19 03:37:08.534570 | orchestrator | 2026-03-19 03:37:08 | INFO  | Flavor SCS-2V-4-20s created
2026-03-19 03:37:08.534576 | orchestrator | 2026-03-19 03:37:08 | INFO  | Flavor SCS-4V-8-50s created
2026-03-19 03:37:08.534581 | orchestrator | 2026-03-19 03:37:08 | INFO  | Flavor SCS-8V-32-100s created
2026-03-19 03:37:10.938722 | orchestrator | 2026-03-19 03:37:10 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-19 03:37:21.093029 | orchestrator | 2026-03-19 03:37:21 | INFO  | Task d2d457c4-5782-4086-be26-4539398d7abc (bootstrap-basic) was prepared for execution.
2026-03-19 03:37:21.093139 | orchestrator | 2026-03-19 03:37:21 | INFO  | It takes a moment until task d2d457c4-5782-4086-be26-4539398d7abc (bootstrap-basic) has been started and output is visible here.
2026-03-19 03:38:05.666474 | orchestrator |
2026-03-19 03:38:05.666606 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-19 03:38:05.666617 | orchestrator |
2026-03-19 03:38:05.666623 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-19 03:38:05.666628 | orchestrator | Thursday 19 March 2026 03:37:25 +0000 (0:00:00.075) 0:00:00.075 ********
2026-03-19 03:38:05.666634 | orchestrator | ok: [localhost]
2026-03-19 03:38:05.666640 | orchestrator |
2026-03-19 03:38:05.666645 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-19 03:38:05.666650 | orchestrator | Thursday 19 March 2026 03:37:27 +0000 (0:00:01.854) 0:00:01.929 ********
2026-03-19 03:38:05.666655 | orchestrator | ok: [localhost]
2026-03-19 03:38:05.666660 | orchestrator |
2026-03-19 03:38:05.666665 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-19 03:38:05.666670 | orchestrator | Thursday 19 March 2026 03:37:34 +0000 (0:00:07.167) 0:00:09.097 ********
2026-03-19 03:38:05.666675 | orchestrator | changed: [localhost]
2026-03-19 03:38:05.666681 | orchestrator |
2026-03-19 03:38:05.666686 | orchestrator | TASK [Create public network] ***************************************************
2026-03-19 03:38:05.666691 | orchestrator | Thursday 19 March 2026 03:37:41 +0000 (0:00:06.410) 0:00:15.508 ********
2026-03-19 03:38:05.666696 | orchestrator | changed: [localhost]
2026-03-19 03:38:05.666701 | orchestrator |
2026-03-19 03:38:05.666706 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-19 03:38:05.666710 | orchestrator | Thursday 19 March 2026 03:37:46 +0000 (0:00:05.702) 0:00:21.210 ********
2026-03-19 03:38:05.666718 | orchestrator | changed: [localhost]
2026-03-19 03:38:05.666723 | orchestrator |
2026-03-19 03:38:05.666729 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-19 03:38:05.666734 | orchestrator | Thursday 19 March 2026 03:37:53 +0000 (0:00:06.604) 0:00:27.815 ********
2026-03-19 03:38:05.666739 | orchestrator | changed: [localhost]
2026-03-19 03:38:05.666743 | orchestrator |
2026-03-19 03:38:05.666748 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-19 03:38:05.666753 | orchestrator | Thursday 19 March 2026 03:37:57 +0000 (0:00:04.428) 0:00:32.243 ********
2026-03-19 03:38:05.666758 | orchestrator | changed: [localhost]
2026-03-19 03:38:05.666763 | orchestrator |
2026-03-19 03:38:05.666768 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-19 03:38:05.666781 | orchestrator | Thursday 19 March 2026 03:38:01 +0000 (0:00:03.933) 0:00:36.177 ********
2026-03-19 03:38:05.666786 | orchestrator | ok: [localhost]
2026-03-19 03:38:05.666791 | orchestrator |
2026-03-19 03:38:05.666796 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:38:05.666801 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-19 03:38:05.666807 | orchestrator |
2026-03-19 03:38:05.666812 | orchestrator |
2026-03-19 03:38:05.666816 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:38:05.666821 | orchestrator | Thursday 19 March 2026 03:38:05 +0000 (0:00:03.698) 0:00:39.876 ********
2026-03-19 03:38:05.666826 | orchestrator | ===============================================================================
2026-03-19 03:38:05.666831 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.17s
2026-03-19 03:38:05.666836 | orchestrator | Set public network to default ------------------------------------------- 6.60s
2026-03-19 03:38:05.666841 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.41s
2026-03-19 03:38:05.666846 | orchestrator | Create public network --------------------------------------------------- 5.70s
2026-03-19 03:38:05.666867 | orchestrator | Create public subnet ---------------------------------------------------- 4.43s
2026-03-19 03:38:05.666872 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.93s
2026-03-19 03:38:05.666878 | orchestrator | Create manager role ----------------------------------------------------- 3.70s
2026-03-19 03:38:05.666883 | orchestrator | Gathering Facts --------------------------------------------------------- 1.85s
2026-03-19 03:38:08.205528 | orchestrator | 2026-03-19 03:38:08 | INFO  | It takes a moment until task d90814c7-87d9-4510-bae4-3dc00b4d1527 (image-manager) has been started and output is visible here.
2026-03-19 03:38:51.251961 | orchestrator | 2026-03-19 03:38:10 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-19 03:38:51.252043 | orchestrator | 2026-03-19 03:38:11 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-19 03:38:51.252051 | orchestrator | 2026-03-19 03:38:11 | INFO  | Importing image Cirros 0.6.2
2026-03-19 03:38:51.252056 | orchestrator | 2026-03-19 03:38:11 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-19 03:38:51.252061 | orchestrator | 2026-03-19 03:38:13 | INFO  | Waiting for image to leave queued state...
2026-03-19 03:38:51.252067 | orchestrator | 2026-03-19 03:38:15 | INFO  | Waiting for import to complete...
2026-03-19 03:38:51.252071 | orchestrator | 2026-03-19 03:38:25 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-19 03:38:51.252075 | orchestrator | 2026-03-19 03:38:26 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-19 03:38:51.252079 | orchestrator | 2026-03-19 03:38:26 | INFO  | Setting internal_version = 0.6.2 2026-03-19 03:38:51.252083 | orchestrator | 2026-03-19 03:38:26 | INFO  | Setting image_original_user = cirros 2026-03-19 03:38:51.252088 | orchestrator | 2026-03-19 03:38:26 | INFO  | Adding tag os:cirros 2026-03-19 03:38:51.252092 | orchestrator | 2026-03-19 03:38:26 | INFO  | Setting property architecture: x86_64 2026-03-19 03:38:51.252095 | orchestrator | 2026-03-19 03:38:26 | INFO  | Setting property hw_disk_bus: scsi 2026-03-19 03:38:51.252099 | orchestrator | 2026-03-19 03:38:27 | INFO  | Setting property hw_rng_model: virtio 2026-03-19 03:38:51.252103 | orchestrator | 2026-03-19 03:38:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-19 03:38:51.252107 | orchestrator | 2026-03-19 03:38:27 | INFO  | Setting property hw_watchdog_action: reset 2026-03-19 03:38:51.252111 | orchestrator | 2026-03-19 03:38:27 | INFO  | Setting property hypervisor_type: qemu 2026-03-19 03:38:51.252115 | orchestrator | 2026-03-19 03:38:28 | INFO  | Setting property os_distro: cirros 2026-03-19 03:38:51.252119 | orchestrator | 2026-03-19 03:38:28 | INFO  | Setting property os_purpose: minimal 2026-03-19 03:38:51.252122 | orchestrator | 2026-03-19 03:38:28 | INFO  | Setting property replace_frequency: never 2026-03-19 03:38:51.252126 | orchestrator | 2026-03-19 03:38:28 | INFO  | Setting property uuid_validity: none 2026-03-19 03:38:51.252130 | orchestrator | 2026-03-19 03:38:29 | INFO  | Setting property provided_until: none 2026-03-19 03:38:51.252134 | orchestrator | 2026-03-19 03:38:29 | INFO  | Setting property image_description: Cirros 2026-03-19 03:38:51.252137 | orchestrator | 2026-03-19 03:38:29 | INFO  | 
Setting property image_name: Cirros 2026-03-19 03:38:51.252141 | orchestrator | 2026-03-19 03:38:29 | INFO  | Setting property internal_version: 0.6.2 2026-03-19 03:38:51.252145 | orchestrator | 2026-03-19 03:38:30 | INFO  | Setting property image_original_user: cirros 2026-03-19 03:38:51.252164 | orchestrator | 2026-03-19 03:38:30 | INFO  | Setting property os_version: 0.6.2 2026-03-19 03:38:51.252174 | orchestrator | 2026-03-19 03:38:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-19 03:38:51.252178 | orchestrator | 2026-03-19 03:38:31 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-19 03:38:51.252182 | orchestrator | 2026-03-19 03:38:31 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-19 03:38:51.252186 | orchestrator | 2026-03-19 03:38:31 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-19 03:38:51.252189 | orchestrator | 2026-03-19 03:38:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-19 03:38:51.252193 | orchestrator | 2026-03-19 03:38:31 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-19 03:38:51.252200 | orchestrator | 2026-03-19 03:38:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-19 03:38:51.252204 | orchestrator | 2026-03-19 03:38:31 | INFO  | Importing image Cirros 0.6.3 2026-03-19 03:38:51.252208 | orchestrator | 2026-03-19 03:38:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-19 03:38:51.252212 | orchestrator | 2026-03-19 03:38:32 | INFO  | Waiting for image to leave queued state... 2026-03-19 03:38:51.252216 | orchestrator | 2026-03-19 03:38:34 | INFO  | Waiting for import to complete... 
2026-03-19 03:38:51.252232 | orchestrator | 2026-03-19 03:38:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-19 03:38:51.252238 | orchestrator | 2026-03-19 03:38:45 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-19 03:38:51.252247 | orchestrator | 2026-03-19 03:38:45 | INFO  | Setting internal_version = 0.6.3 2026-03-19 03:38:51.252254 | orchestrator | 2026-03-19 03:38:45 | INFO  | Setting image_original_user = cirros 2026-03-19 03:38:51.252260 | orchestrator | 2026-03-19 03:38:45 | INFO  | Adding tag os:cirros 2026-03-19 03:38:51.252266 | orchestrator | 2026-03-19 03:38:45 | INFO  | Setting property architecture: x86_64 2026-03-19 03:38:51.252272 | orchestrator | 2026-03-19 03:38:45 | INFO  | Setting property hw_disk_bus: scsi 2026-03-19 03:38:51.252278 | orchestrator | 2026-03-19 03:38:45 | INFO  | Setting property hw_rng_model: virtio 2026-03-19 03:38:51.252284 | orchestrator | 2026-03-19 03:38:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-19 03:38:51.252290 | orchestrator | 2026-03-19 03:38:46 | INFO  | Setting property hw_watchdog_action: reset 2026-03-19 03:38:51.252296 | orchestrator | 2026-03-19 03:38:46 | INFO  | Setting property hypervisor_type: qemu 2026-03-19 03:38:51.252302 | orchestrator | 2026-03-19 03:38:47 | INFO  | Setting property os_distro: cirros 2026-03-19 03:38:51.252307 | orchestrator | 2026-03-19 03:38:47 | INFO  | Setting property os_purpose: minimal 2026-03-19 03:38:51.252313 | orchestrator | 2026-03-19 03:38:47 | INFO  | Setting property replace_frequency: never 2026-03-19 03:38:51.252319 | orchestrator | 2026-03-19 03:38:47 | INFO  | Setting property uuid_validity: none 2026-03-19 03:38:51.252325 | orchestrator | 2026-03-19 03:38:48 | INFO  | Setting property provided_until: none 2026-03-19 03:38:51.252331 | orchestrator | 2026-03-19 03:38:48 | INFO  | Setting property image_description: Cirros 2026-03-19 03:38:51.252336 | orchestrator | 2026-03-19 03:38:48 | INFO  | 
Setting property image_name: Cirros 2026-03-19 03:38:51.252342 | orchestrator | 2026-03-19 03:38:49 | INFO  | Setting property internal_version: 0.6.3 2026-03-19 03:38:51.252355 | orchestrator | 2026-03-19 03:38:49 | INFO  | Setting property image_original_user: cirros 2026-03-19 03:38:51.252361 | orchestrator | 2026-03-19 03:38:49 | INFO  | Setting property os_version: 0.6.3 2026-03-19 03:38:51.252367 | orchestrator | 2026-03-19 03:38:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-19 03:38:51.252373 | orchestrator | 2026-03-19 03:38:50 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-19 03:38:51.252379 | orchestrator | 2026-03-19 03:38:50 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-19 03:38:51.252386 | orchestrator | 2026-03-19 03:38:50 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-19 03:38:51.252392 | orchestrator | 2026-03-19 03:38:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-19 03:38:51.582859 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-19 03:38:53.802394 | orchestrator | 2026-03-19 03:38:53 | INFO  | date: 2026-03-19 2026-03-19 03:38:53.802489 | orchestrator | 2026-03-19 03:38:53 | INFO  | image: octavia-amphora-haproxy-2024.2.20260319.qcow2 2026-03-19 03:38:53.802527 | orchestrator | 2026-03-19 03:38:53 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260319.qcow2 2026-03-19 03:38:53.802543 | orchestrator | 2026-03-19 03:38:53 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260319.qcow2.CHECKSUM 2026-03-19 03:38:53.997104 | orchestrator | 2026-03-19 03:38:53 | INFO  | checksum: 642ccff4a9ff614ac8d16206481b6ae55f81874be4d69c99c5978e507ee9dddf 2026-03-19 03:38:54.071762 | orchestrator | 
2026-03-19 03:38:54 | INFO  | It takes a moment until task 9a9ad58e-1340-46cf-a917-bc9303dc9f29 (image-manager) has been started and output is visible here. 2026-03-19 03:39:57.102321 | orchestrator | 2026-03-19 03:38:56 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-19' 2026-03-19 03:39:57.102427 | orchestrator | 2026-03-19 03:38:56 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260319.qcow2: 200 2026-03-19 03:39:57.102439 | orchestrator | 2026-03-19 03:38:56 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-19 2026-03-19 03:39:57.102444 | orchestrator | 2026-03-19 03:38:56 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260319.qcow2 2026-03-19 03:39:57.102449 | orchestrator | 2026-03-19 03:38:58 | INFO  | Waiting for image to leave queued state... 2026-03-19 03:39:57.102454 | orchestrator | 2026-03-19 03:39:00 | INFO  | Waiting for import to complete... 2026-03-19 03:39:57.102458 | orchestrator | 2026-03-19 03:39:10 | INFO  | Waiting for import to complete... 2026-03-19 03:39:57.102462 | orchestrator | 2026-03-19 03:39:20 | INFO  | Waiting for import to complete... 2026-03-19 03:39:57.102466 | orchestrator | 2026-03-19 03:39:30 | INFO  | Waiting for import to complete... 2026-03-19 03:39:57.102472 | orchestrator | 2026-03-19 03:39:40 | INFO  | Waiting for import to complete... 
2026-03-19 03:39:57.102476 | orchestrator | 2026-03-19 03:39:50 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-19' successfully completed, reloading images 2026-03-19 03:39:57.102482 | orchestrator | 2026-03-19 03:39:51 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-19' 2026-03-19 03:39:57.102486 | orchestrator | 2026-03-19 03:39:51 | INFO  | Setting internal_version = 2026-03-19 2026-03-19 03:39:57.102508 | orchestrator | 2026-03-19 03:39:51 | INFO  | Setting image_original_user = ubuntu 2026-03-19 03:39:57.102512 | orchestrator | 2026-03-19 03:39:51 | INFO  | Adding tag amphora 2026-03-19 03:39:57.102517 | orchestrator | 2026-03-19 03:39:51 | INFO  | Adding tag os:ubuntu 2026-03-19 03:39:57.102523 | orchestrator | 2026-03-19 03:39:51 | INFO  | Setting property architecture: x86_64 2026-03-19 03:39:57.102529 | orchestrator | 2026-03-19 03:39:51 | INFO  | Setting property hw_disk_bus: scsi 2026-03-19 03:39:57.102538 | orchestrator | 2026-03-19 03:39:52 | INFO  | Setting property hw_rng_model: virtio 2026-03-19 03:39:57.102544 | orchestrator | 2026-03-19 03:39:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-19 03:39:57.102550 | orchestrator | 2026-03-19 03:39:52 | INFO  | Setting property hw_watchdog_action: reset 2026-03-19 03:39:57.102556 | orchestrator | 2026-03-19 03:39:52 | INFO  | Setting property hypervisor_type: qemu 2026-03-19 03:39:57.102562 | orchestrator | 2026-03-19 03:39:53 | INFO  | Setting property os_distro: ubuntu 2026-03-19 03:39:57.102617 | orchestrator | 2026-03-19 03:39:53 | INFO  | Setting property replace_frequency: quarterly 2026-03-19 03:39:57.102624 | orchestrator | 2026-03-19 03:39:54 | INFO  | Setting property uuid_validity: last-1 2026-03-19 03:39:57.102631 | orchestrator | 2026-03-19 03:39:54 | INFO  | Setting property provided_until: none 2026-03-19 03:39:57.102638 | orchestrator | 2026-03-19 03:39:54 | INFO  | Setting property os_purpose: network 2026-03-19 03:39:57.102643 | orchestrator 
| 2026-03-19 03:39:54 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-19 03:39:57.102659 | orchestrator | 2026-03-19 03:39:55 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-19 03:39:57.102663 | orchestrator | 2026-03-19 03:39:55 | INFO  | Setting property internal_version: 2026-03-19 2026-03-19 03:39:57.102667 | orchestrator | 2026-03-19 03:39:55 | INFO  | Setting property image_original_user: ubuntu 2026-03-19 03:39:57.102670 | orchestrator | 2026-03-19 03:39:55 | INFO  | Setting property os_version: 2026-03-19 2026-03-19 03:39:57.102675 | orchestrator | 2026-03-19 03:39:56 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260319.qcow2 2026-03-19 03:39:57.102679 | orchestrator | 2026-03-19 03:39:56 | INFO  | Setting property image_build_date: 2026-03-19 2026-03-19 03:39:57.102682 | orchestrator | 2026-03-19 03:39:56 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-19' 2026-03-19 03:39:57.102686 | orchestrator | 2026-03-19 03:39:56 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-19' 2026-03-19 03:39:57.102690 | orchestrator | 2026-03-19 03:39:56 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-19 03:39:57.102706 | orchestrator | 2026-03-19 03:39:56 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-19 03:39:57.102712 | orchestrator | 2026-03-19 03:39:56 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-19 03:39:57.102715 | orchestrator | 2026-03-19 03:39:56 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-19 03:39:57.888222 | orchestrator | ok: Runtime: 0:02:59.875493 2026-03-19 03:39:57.908683 | 2026-03-19 03:39:57.908828 | TASK [Run checks] 2026-03-19 03:39:58.638612 | orchestrator | + set -e 2026-03-19 03:39:58.638806 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-19 03:39:58.638821 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 03:39:58.638833 | orchestrator | ++ INTERACTIVE=false 2026-03-19 03:39:58.638841 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 03:39:58.638848 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 03:39:58.638857 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-19 03:39:58.639791 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-19 03:39:58.646201 | orchestrator | 2026-03-19 03:39:58.646278 | orchestrator | # CHECK 2026-03-19 03:39:58.646284 | orchestrator | 2026-03-19 03:39:58.646288 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 03:39:58.646297 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 03:39:58.646301 | orchestrator | + echo 2026-03-19 03:39:58.646306 | orchestrator | + echo '# CHECK' 2026-03-19 03:39:58.646310 | orchestrator | + echo 2026-03-19 03:39:58.646317 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-19 03:39:58.647090 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-19 03:39:58.709678 | orchestrator | 2026-03-19 03:39:58.709745 | orchestrator | ## Containers @ testbed-manager 2026-03-19 03:39:58.709752 | orchestrator | 2026-03-19 03:39:58.709758 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-19 03:39:58.709763 | orchestrator | + echo 2026-03-19 03:39:58.709767 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-19 03:39:58.709772 | orchestrator | + echo 2026-03-19 03:39:58.709776 | orchestrator | + osism container testbed-manager ps 2026-03-19 03:40:00.802493 | orchestrator | 2026-03-19 03:40:00 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-19 03:40:01.201493 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-19 03:40:01.201770 | orchestrator | cf5574a03487 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_blackbox_exporter 2026-03-19 03:40:01.201814 | orchestrator | eb832e0d549b registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-03-19 03:40:01.201834 | orchestrator | fc6e6b17402f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-19 03:40:01.201853 | orchestrator | bd99b02feabd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter 2026-03-19 03:40:01.201870 | orchestrator | 15c7e7c20eb1 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-03-19 03:40:01.201890 | orchestrator | 9ae608cc2a9b registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 57 minutes cephclient 2026-03-19 03:40:01.201901 | orchestrator | 8af462aee8ea registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-19 03:40:01.201911 | orchestrator | b538edab9885 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-19 03:40:01.201948 | orchestrator | 7aa3d4dbd9ca registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-19 03:40:01.202208 | orchestrator | abba8daeb9f2 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-03-19 03:40:01.202227 | orchestrator | e9f7c7bf5cf4 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-03-19 03:40:01.202238 | 
orchestrator | b7cc076af5e7 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-03-19 03:40:01.202248 | orchestrator | 0103d95f5f9f registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-03-19 03:40:01.202260 | orchestrator | 2e75fca56a02 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-19 03:40:01.202278 | orchestrator | 33916e633b35 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-03-19 03:40:01.202341 | orchestrator | ab945ec55dc1 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-03-19 03:40:01.202361 | orchestrator | fda842285e6a registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-03-19 03:40:01.202378 | orchestrator | 2b03f42e2f90 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-03-19 03:40:01.202394 | orchestrator | 2b2cd09d88d8 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-03-19 03:40:01.202410 | orchestrator | d94c779924e2 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-03-19 03:40:01.202427 | orchestrator | d360ca330fcf registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-03-19 03:40:01.202444 | orchestrator | 7ed39c16a2ca registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-03-19 03:40:01.202477 | 
orchestrator | e8871ffb88c7 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-03-19 03:40:01.202496 | orchestrator | cb5b2e724b41 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-03-19 03:40:01.202514 | orchestrator | ca728a50a790 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-03-19 03:40:01.202549 | orchestrator | a17b352df2a8 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-03-19 03:40:01.202665 | orchestrator | 4d1b8af8bfab registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-03-19 03:40:01.202694 | orchestrator | 7d608147876a registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-19 03:40:01.202710 | orchestrator | 90d068aeff89 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-19 03:40:01.202736 | orchestrator | 9346fc59aea3 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-19 03:40:01.557652 | orchestrator | 2026-03-19 03:40:01.557764 | orchestrator | ## Images @ testbed-manager 2026-03-19 03:40:01.557780 | orchestrator | 2026-03-19 03:40:01.557787 | orchestrator | + echo 2026-03-19 03:40:01.557795 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-19 03:40:01.557802 | orchestrator | + echo 2026-03-19 03:40:01.557812 | orchestrator | + osism container testbed-manager images 2026-03-19 03:40:03.952090 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-19 03:40:03.952255 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 57b71a5f6bcd 24 hours ago 239MB 2026-03-19 03:40:03.952268 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 7 weeks ago 41.4MB 2026-03-19 03:40:03.952275 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-19 03:40:03.952282 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB 2026-03-19 03:40:03.952288 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-19 03:40:03.952294 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-19 03:40:03.952300 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-19 03:40:03.952309 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB 2026-03-19 03:40:03.952316 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-19 03:40:03.952348 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB 2026-03-19 03:40:03.952355 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB 2026-03-19 03:40:03.952363 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-19 03:40:03.952370 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB 2026-03-19 03:40:03.952377 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB 2026-03-19 03:40:03.952383 | 
orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB 2026-03-19 03:40:03.952390 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB 2026-03-19 03:40:03.952396 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB 2026-03-19 03:40:03.952403 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB 2026-03-19 03:40:03.952411 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-19 03:40:03.952417 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-19 03:40:03.952424 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB 2026-03-19 03:40:03.952430 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB 2026-03-19 03:40:03.952437 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB 2026-03-19 03:40:03.952444 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-19 03:40:03.952451 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-03-19 03:40:04.305846 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-19 03:40:04.306749 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-19 03:40:04.366987 | orchestrator | 2026-03-19 03:40:04.367060 | orchestrator | ## Containers @ testbed-node-0 2026-03-19 03:40:04.367068 | orchestrator | 2026-03-19 03:40:04.367073 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-19 03:40:04.367078 | orchestrator | + echo 2026-03-19 03:40:04.367083 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-19 03:40:04.367088 | orchestrator | + echo 2026-03-19 03:40:04.367093 | orchestrator | + osism container 
testbed-node-0 ps 2026-03-19 03:40:06.959048 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-19 03:40:06.959101 | orchestrator | 7ac1504b9e9c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-19 03:40:06.959118 | orchestrator | 04236f38fef0 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-19 03:40:06.959123 | orchestrator | bab38102890a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-03-19 03:40:06.959127 | orchestrator | 26019a5fbc99 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-19 03:40:06.959141 | orchestrator | d11c27b82de9 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-19 03:40:06.959145 | orchestrator | 993ada8c2117 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-19 03:40:06.959158 | orchestrator | ba2dfeef7313 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter 2026-03-19 03:40:06.959162 | orchestrator | 9fe9c7ff34dd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_node_exporter 2026-03-19 03:40:06.959166 | orchestrator | 7b2969f6c9d5 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-19 03:40:06.959170 | orchestrator | 3a8c73bb639e 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-19 03:40:06.959174 | orchestrator | 2e7f47dd93ee registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-03-19 03:40:06.959177 | orchestrator | 7584e833c2e2 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-19 03:40:06.959181 | orchestrator | 2852bc2adb08 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-03-19 03:40:06.959185 | orchestrator | 12726527bf05 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-03-19 03:40:06.959189 | orchestrator | 3cf5b1f46b9d registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-03-19 03:40:06.959193 | orchestrator | 7bd91981a68c registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-03-19 03:40:06.959196 | orchestrator | a77d1eb11af3 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-03-19 03:40:06.959200 | orchestrator | 7a637d1a6a5f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-03-19 03:40:06.959204 | orchestrator | 45e59f88159c registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-03-19 03:40:06.959218 | orchestrator | fb8740f90b8a 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-03-19 03:40:06.959222 | orchestrator | 330e27cf7013 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-03-19 03:40:06.959226 | orchestrator | 28b27ad06e51 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-03-19 03:40:06.959232 | orchestrator | 84df14cfb204 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-03-19 03:40:06.959236 | orchestrator | cc426907c7f7 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-03-19 03:40:06.959240 | orchestrator | 9bbd1cbe64c1 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-03-19 03:40:06.959246 | orchestrator | d6a34ac0e6ab registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-03-19 03:40:06.959250 | orchestrator | 91632474f6c9 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-03-19 03:40:06.959253 | orchestrator | 655bf51f2eb9 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-03-19 03:40:06.959257 | orchestrator | 84cc8a5ae1f5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-03-19 03:40:06.959261 | orchestrator | 9e56d14dc9d6 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-03-19 03:40:06.959265 | orchestrator | a3b124ec3cd1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-03-19 03:40:06.959269 | orchestrator | 5f1376938661 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-03-19 03:40:06.959273 | orchestrator | f0a83540bde1 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-03-19 03:40:06.959277 | orchestrator | 9c3de9e2c821 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-03-19 03:40:06.959280 | orchestrator | b68e539c2a84 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-03-19 03:40:06.959284 | orchestrator | 358673ea9cf1 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-03-19 03:40:06.959288 | orchestrator | 45601d7e15b4 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-03-19 03:40:06.959292 | orchestrator | 850d82f5c2ea registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-03-19 03:40:06.959296 | orchestrator | d122860efd04 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-03-19 03:40:06.959311 | orchestrator | 7dbfa606339e registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-03-19 03:40:06.959318 | orchestrator | 0b9a10b82b1f registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-03-19 03:40:06.959322 | orchestrator | 9d2129adb174 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-03-19 03:40:06.959338 | orchestrator | 5cabc2f01756 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-03-19 03:40:06.959342 | orchestrator | 8e41aa202e7f registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-03-19 03:40:06.959346 | orchestrator | 5908a06e35f9 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-03-19 03:40:06.959350 | orchestrator | caffdd7ad683 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-03-19 03:40:06.959354 | orchestrator | 08c3a0c4de2e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-03-19 03:40:06.959358 | orchestrator | 06909b25a5d5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-03-19 03:40:06.959362 | orchestrator | 086ce128d548 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-03-19 03:40:06.959366 | orchestrator | 73cf912c361a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-03-19 03:40:06.959369 | orchestrator | 2a2819137b01 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-19 03:40:06.959373 | orchestrator | e6aaaabd2759 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-19 03:40:06.959377 | orchestrator | 6f8162fdfda3 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-19 03:40:06.959381 | orchestrator | 7958783f601e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-19 03:40:06.959385 | orchestrator | cf886a13607c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-19 03:40:06.959388 | orchestrator | f5115c10262e registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-19 03:40:06.959395 | orchestrator | 4be461e99a7a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-19 03:40:06.959399 | orchestrator | 3b2def673aa3 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-19 03:40:06.959410 | orchestrator | fada7250c983 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-19 03:40:06.959416 | orchestrator | e5c326da51e7 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-19 03:40:06.959420 | orchestrator | 5662e567b280 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-19 03:40:06.959424 | orchestrator | 279ede705443 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-19 03:40:06.959428 | orchestrator | 8a8806b8d108 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-19 03:40:06.959432 | orchestrator | 5a8d37bebe36 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-19 03:40:06.959435 | orchestrator | 25a1215196f5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-03-19 03:40:06.959439 | orchestrator | 52505519445f registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-03-19 03:40:06.959443 | orchestrator | 0dcb09706665 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-03-19 03:40:06.959447 | orchestrator | 6396d52f685a registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-19 03:40:06.959451 | orchestrator | f7abac99b393 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-19 03:40:06.959455 | orchestrator | 9c3d2141186f registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-19 03:40:06.959458 | orchestrator | b0bc52594c4d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-19 03:40:07.322558 | orchestrator |
2026-03-19 03:40:07.322635 | orchestrator | ## Images @ testbed-node-0
2026-03-19 03:40:07.322644 | orchestrator |
2026-03-19 03:40:07.322650 | orchestrator | + echo
2026-03-19 03:40:07.322656 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-19 03:40:07.322661 | orchestrator | + echo
2026-03-19 03:40:07.322667 | orchestrator | + osism container testbed-node-0 images
2026-03-19 03:40:09.761199 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-19 03:40:09.761320 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-19 03:40:09.761337 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-19 03:40:09.761348 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-19 03:40:09.761359 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-19 03:40:09.761382 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-19 03:40:09.761388 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-19 03:40:09.761407 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-19 03:40:09.761414 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-19 03:40:09.761428 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-19 03:40:09.761434 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-19 03:40:09.761439 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-19 03:40:09.761445 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-19 03:40:09.761451 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-19 03:40:09.761456 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-19 03:40:09.761462 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-19 03:40:09.761468 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-19 03:40:09.761473 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-19 03:40:09.761479 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-19 03:40:09.761485 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-19 03:40:09.761490 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-19 03:40:09.761496 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-19 03:40:09.761501 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-19 03:40:09.761507 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-19 03:40:09.761513 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-19 03:40:09.761566 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-19 03:40:09.761619 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-19 03:40:09.761625 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-19 03:40:09.761635 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-19 03:40:09.761641 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-19 03:40:09.761647 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-19 03:40:09.761658 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-19 03:40:09.761680 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-19 03:40:09.761687 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-19 03:40:09.761693 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-19 03:40:09.761702 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-19 03:40:09.761711 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-19 03:40:09.761726 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-19 03:40:09.761737 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-19 03:40:09.761746 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-19 03:40:09.761755 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-19 03:40:09.761764 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-19 03:40:09.761773 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-19 03:40:09.761783 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-19 03:40:09.761793 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-19 03:40:09.761802 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-19 03:40:09.761812 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-19 03:40:09.761819 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-19 03:40:09.761824 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-19 03:40:09.761830 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-19 03:40:09.761836 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-19 03:40:09.761841 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-19 03:40:09.761847 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-19 03:40:09.761852 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-19 03:40:09.761858 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-19 03:40:09.761863 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-19 03:40:09.761869 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-19 03:40:09.761881 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-19 03:40:09.761886 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-19 03:40:09.761896 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-19 03:40:09.761902 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-19 03:40:09.761907 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-19 03:40:09.761913 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-19 03:40:09.761919 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-19 03:40:09.761930 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-19 03:40:09.761936 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-19 03:40:09.761942 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-19 03:40:09.761947 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-19 03:40:09.761953 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-19 03:40:09.761959 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-19 03:40:10.107333 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-19 03:40:10.108058 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-19 03:40:10.158074 | orchestrator |
2026-03-19 03:40:10.158166 | orchestrator | ## Containers @ testbed-node-1
2026-03-19 03:40:10.158178 | orchestrator |
2026-03-19 03:40:10.158183 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-19 03:40:10.158187 | orchestrator | + echo
2026-03-19 03:40:10.158191 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-19 03:40:10.158197 | orchestrator | + echo
2026-03-19 03:40:10.158201 | orchestrator | + osism container testbed-node-1 ps
2026-03-19 03:40:12.609915 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-19 03:40:12.609997 | orchestrator | a52a8ecdf6f2 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-19 03:40:12.610007 | orchestrator | e48e0e1e6d82 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-19 03:40:12.610080 | orchestrator | 8b88ad1fecdf registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-19 03:40:12.610088 | orchestrator | 7e4fbf0ae961 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-19 03:40:12.610096 | orchestrator | 533abb1cd085 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-19 03:40:12.610102 | orchestrator | 3376ca630d5a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-03-19 03:40:12.610126 | orchestrator | b9b80907d183 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter
2026-03-19 03:40:12.610132 | orchestrator | 5f0549c03a2c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-19 03:40:12.610138 | orchestrator | 6accb3d511fb registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-19 03:40:12.610144 | orchestrator | 9a8a6bb19cdd registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-03-19 03:40:12.610149 | orchestrator | cef8c77acd97 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-03-19 03:40:12.610155 | orchestrator | 7927521ecff5 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-03-19 03:40:12.610173 | orchestrator | 849a66c3bb8f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-03-19 03:40:12.610179 | orchestrator | cb295c1311d7 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-03-19 03:40:12.610184 | orchestrator | caa3b1ccc580 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-03-19 03:40:12.610190 | orchestrator | ebfd15dde1ad registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_api
2026-03-19 03:40:12.610195 | orchestrator | c3121754615c registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-03-19 03:40:12.610201 | orchestrator | 04d479264d1c registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-03-19 03:40:12.610206 | orchestrator | aad2f53b615b registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-03-19 03:40:12.610224 | orchestrator | 00f88fab45cf registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-03-19 03:40:12.610230 | orchestrator | 09638b91f91f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-03-19 03:40:12.610236 | orchestrator | e04d1817f3d8 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-03-19 03:40:12.610615 | orchestrator | eebde4e77bba registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-03-19 03:40:12.610641 | orchestrator | 48e37513c9f8 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-03-19 03:40:12.610648 | orchestrator | 3ca9e9112bd2 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-03-19 03:40:12.610654 | orchestrator | d35e09d333d8 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-03-19 03:40:12.610660 | orchestrator | 5bf0c1519d04 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-03-19 03:40:12.610667 | orchestrator | b566a82e3bb2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-03-19 03:40:12.610673 | orchestrator | ef6fc07028a5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-03-19 03:40:12.610679 | orchestrator | 4d70cd81897c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-03-19 03:40:12.610685 | orchestrator | c5c84fb0c2c7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-03-19 03:40:12.610692 | orchestrator | 2da95809859b registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-03-19 03:40:12.610698 | orchestrator | c8ffd90aed21 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-03-19 03:40:12.610705 | orchestrator | 5dfb3e46c37e registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-03-19 03:40:12.610711 | orchestrator | b161d06297a5 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-03-19 03:40:12.610717 | orchestrator | 4f65a011fedd registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api
2026-03-19 03:40:12.610724 | orchestrator | 9a13c689a287 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-03-19 03:40:12.610736 | orchestrator | 10838ecbaf88 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-03-19 03:40:12.610745 | orchestrator | 47fbc171d441 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-03-19 03:40:12.610754 | orchestrator | b59c95a6622a registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-03-19 03:40:12.610762 | orchestrator | e550b5ef3e9d registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-03-19 03:40:12.610775 | orchestrator | f01498b9ee9b registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-03-19 03:40:12.610791 | orchestrator | b934e789cde9 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-03-19 03:40:12.610799 | orchestrator | a8a5d5698095 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-03-19 03:40:12.610807 | orchestrator | 400edc37601a registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-03-19 03:40:12.610825 | orchestrator | 21312be076ba registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-03-19 03:40:12.610835 | orchestrator | 36342ba193d3 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-03-19 03:40:12.610843 | orchestrator | 59014c32719e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-03-19 03:40:12.610853 | orchestrator | 4afea7a44b48 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-03-19 03:40:12.610861 | orchestrator | 2f8623e04b21 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1
2026-03-19 03:40:12.610871 | orchestrator | 00b426694c35 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-03-19 03:40:12.610878 | orchestrator | 7d1c29d08d66 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-03-19 03:40:12.610884 | orchestrator | 842f938f6edb registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-19 03:40:12.610889 | orchestrator | 57ae4e785087 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-19 03:40:12.610894 | orchestrator | 1ee7be4a1ffb registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-19 03:40:12.610900 | orchestrator | d65049708f8a registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-19 03:40:12.610905 | orchestrator | eab4a209ddf1 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-19 03:40:12.610910 | orchestrator | 3daf7a12fe88 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-19 03:40:12.610916 | orchestrator | 877829633fc7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-19 03:40:12.610926 | orchestrator | 673493865e47 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-19 03:40:12.610931 | orchestrator | 5db581d470bd registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-19 03:40:12.610937 | orchestrator | db5b6161fec3 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-19 03:40:12.610942 | orchestrator | c11b322a92d5 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-19 03:40:12.610952 | orchestrator | 17c6c171fc20 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-19 03:40:12.610958 | orchestrator | 0c5e65f60b1e registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-03-19 03:40:12.610967 | orchestrator | 852bfe6b1585 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-03-19 03:40:12.610972 | orchestrator | 91567a099690 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-03-19 03:40:12.610978 | orchestrator | 5525461ceb01 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-19 03:40:12.610984 | orchestrator | fea95229cdc3 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-19 03:40:12.610992 | orchestrator | 16e622d6defb registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-19 03:40:12.610998 | orchestrator | 6225751c9da1 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-19 03:40:12.947548 | orchestrator |
2026-03-19 03:40:12.947688 | orchestrator | ## Images @ testbed-node-1
2026-03-19 03:40:12.947704 | orchestrator |
2026-03-19 03:40:12.947714 | orchestrator | + echo
2026-03-19 03:40:12.947725 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-19 03:40:12.947735 | orchestrator | + echo
2026-03-19 03:40:12.947746 | orchestrator | + osism container testbed-node-1 images
2026-03-19 03:40:15.395106 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-19 03:40:15.395191 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-19 03:40:15.395198 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-19 03:40:15.395203 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-19 03:40:15.395208 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-19 03:40:15.395212 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-19 03:40:15.395230 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-19 03:40:15.395233 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-19 03:40:15.395237 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-19 03:40:15.395241 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-19 03:40:15.395245 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-19 03:40:15.395248 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-19 03:40:15.395252 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-19 03:40:15.395256 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-19 03:40:15.395259 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-19 03:40:15.395263 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-19 03:40:15.395267 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-19 03:40:15.395271 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-19 03:40:15.395274 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-19 03:40:15.395278 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-19 03:40:15.395282 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-19 03:40:15.395286 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-19 03:40:15.395289 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-19 03:40:15.395293 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-19 03:40:15.395297 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-19 03:40:15.395300 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-19 03:40:15.395304 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-19 03:40:15.395308 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-19 03:40:15.395312 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-19 03:40:15.395316 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-19 03:40:15.395320 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-19 03:40:15.395324 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-19 03:40:15.395346 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-19 03:40:15.395357 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-19 03:40:15.395361 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-19 03:40:15.395365 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-19 03:40:15.395369 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-19 03:40:15.395372 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-19 03:40:15.395376 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-19 03:40:15.395380 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-19 03:40:15.395397 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-19 03:40:15.395401 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-19 03:40:15.395405 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-19 03:40:15.395408 | orchestrator |
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-19 03:40:15.395412 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-19 03:40:15.395417 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-19 03:40:15.395423 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-19 03:40:15.395431 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-19 03:40:15.395439 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-19 03:40:15.395445 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-19 03:40:15.395451 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-19 03:40:15.395457 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-19 03:40:15.395463 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-19 03:40:15.395469 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-19 03:40:15.395475 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-19 03:40:15.395481 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-19 03:40:15.395487 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-19 03:40:15.395493 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-19 03:40:15.395499 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-19 03:40:15.395511 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-19 03:40:15.395521 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-19 03:40:15.395528 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-19 03:40:15.395534 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-19 03:40:15.395540 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-19 03:40:15.395551 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-19 03:40:15.395566 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-19 03:40:15.395611 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-19 03:40:15.395618 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-19 03:40:15.395624 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-19 03:40:15.395631 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-19 03:40:15.730904 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-19 03:40:15.731127 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-19 03:40:15.792619 | 
orchestrator | 2026-03-19 03:40:15.792693 | orchestrator | ## Containers @ testbed-node-2 2026-03-19 03:40:15.792703 | orchestrator | 2026-03-19 03:40:15.792709 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-19 03:40:15.792715 | orchestrator | + echo 2026-03-19 03:40:15.792721 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-19 03:40:15.792728 | orchestrator | + echo 2026-03-19 03:40:15.792733 | orchestrator | + osism container testbed-node-2 ps 2026-03-19 03:40:18.255401 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-19 03:40:18.255478 | orchestrator | 33aabaee13ec registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-19 03:40:18.255487 | orchestrator | 3d53e0c46f9b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-19 03:40:18.255495 | orchestrator | 677155d5de40 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-19 03:40:18.255501 | orchestrator | c9c1f8f20fc1 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-19 03:40:18.255513 | orchestrator | 8246b37e89fb registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-19 03:40:18.255522 | orchestrator | bde21a47120e registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-19 03:40:18.255530 | orchestrator | f00beb091ec4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-19 03:40:18.255559 
| orchestrator | cddc7cc20d3c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-19 03:40:18.255567 | orchestrator | c8ba408c1d2a registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-19 03:40:18.255611 | orchestrator | 49a33a1471b7 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-19 03:40:18.255617 | orchestrator | 7e92183ecee6 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-03-19 03:40:18.255622 | orchestrator | 99a1a1b18c16 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-19 03:40:18.255626 | orchestrator | 31fd89790b99 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-03-19 03:40:18.255631 | orchestrator | cf2c5e468981 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-03-19 03:40:18.255658 | orchestrator | e16ed25823b7 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-03-19 03:40:18.255665 | orchestrator | 65fb40e2d254 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-19 03:40:18.255673 | orchestrator | 2184bf43e65b registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-03-19 03:40:18.255679 | orchestrator | 5b6523973b2b 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-03-19 03:40:18.255686 | orchestrator | aa02dc619d39 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-03-19 03:40:18.255708 | orchestrator | 5cfd7d6719a5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-03-19 03:40:18.255715 | orchestrator | 855000d2a71e registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-03-19 03:40:18.255730 | orchestrator | 698ecf3181cd registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-03-19 03:40:18.255745 | orchestrator | 332bd419aa03 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-03-19 03:40:18.255752 | orchestrator | 7dc0a7ea78d4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-03-19 03:40:18.255758 | orchestrator | f6e558458058 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-03-19 03:40:18.255772 | orchestrator | 74bfeb2c493a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-03-19 03:40:18.255779 | orchestrator | b79aae05e465 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 
2026-03-19 03:40:18.255785 | orchestrator | 524c55615464 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-03-19 03:40:18.255789 | orchestrator | ebce7574a1a7 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-03-19 03:40:18.255794 | orchestrator | 6353fc3e887a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-03-19 03:40:18.255798 | orchestrator | e8a094d191e6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-03-19 03:40:18.255803 | orchestrator | 51a7f7437473 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-03-19 03:40:18.255813 | orchestrator | 1f113d8272e7 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-03-19 03:40:18.255817 | orchestrator | 4107ed23dff0 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-03-19 03:40:18.255822 | orchestrator | 326f3cf98546 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-03-19 03:40:18.255826 | orchestrator | 225789e80fa3 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_api 2026-03-19 03:40:18.255830 | orchestrator | 2ee99ee9e421 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
(healthy) glance_api 2026-03-19 03:40:18.255835 | orchestrator | 8ed1503930b2 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-03-19 03:40:18.255839 | orchestrator | c2369504746f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-03-19 03:40:18.255849 | orchestrator | 5ff1c05f8376 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-03-19 03:40:18.255854 | orchestrator | 5723b599d459 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-03-19 03:40:18.255858 | orchestrator | e629cf1f7600 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-03-19 03:40:18.255867 | orchestrator | d009eeaf9565 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 42 minutes (healthy) nova_api 2026-03-19 03:40:18.255871 | orchestrator | 0c959f6b385e registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-03-19 03:40:18.255876 | orchestrator | 48329e87767f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-03-19 03:40:18.255880 | orchestrator | 0a0137485884 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-03-19 03:40:18.255884 | orchestrator | 1d90d480c527 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-03-19 
03:40:18.255890 | orchestrator | 265f27c11318 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-03-19 03:40:18.255896 | orchestrator | aa93fdada15b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-03-19 03:40:18.255902 | orchestrator | 67e819771899 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-03-19 03:40:18.255909 | orchestrator | bda069fd22f0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-03-19 03:40:18.255914 | orchestrator | 115813b5cae5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-03-19 03:40:18.255920 | orchestrator | 8ad380560b9a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-19 03:40:18.255926 | orchestrator | a51a7551e931 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-19 03:40:18.255936 | orchestrator | 79506e2fecc2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-19 03:40:18.255943 | orchestrator | adace44584e3 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-19 03:40:18.255949 | orchestrator | 761669ef54dd registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-19 03:40:18.255955 | orchestrator | a49e24850504 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-19 03:40:18.255961 | orchestrator | 94ca735dfd3f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-19 03:40:18.255972 | orchestrator | 62a540d2150d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-19 03:40:18.255984 | orchestrator | e5846a93ac40 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-19 03:40:18.255990 | orchestrator | 6b142e710c7d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-19 03:40:18.255997 | orchestrator | 7e64773bf113 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-19 03:40:18.256002 | orchestrator | 6443bd5d3cac registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-19 03:40:18.256009 | orchestrator | 608fa9eb31f7 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-03-19 03:40:18.256015 | orchestrator | 3cba0172973c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-03-19 03:40:18.256022 | orchestrator | e78c1150b487 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-03-19 03:40:18.256029 | orchestrator | 39ed892690e2 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-19 03:40:18.256036 | orchestrator | 76b8688dd4ee registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-19 03:40:18.256042 | orchestrator | 125018eba4f6 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-19 03:40:18.256048 | orchestrator | d3bc9978423d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-19 03:40:18.604676 | orchestrator | 2026-03-19 03:40:18.604762 | orchestrator | ## Images @ testbed-node-2 2026-03-19 03:40:18.604772 | orchestrator | 2026-03-19 03:40:18.604779 | orchestrator | + echo 2026-03-19 03:40:18.604786 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-19 03:40:18.604793 | orchestrator | + echo 2026-03-19 03:40:18.604799 | orchestrator | + osism container testbed-node-2 images 2026-03-19 03:40:21.081839 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-19 03:40:21.081929 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-19 03:40:21.081951 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-19 03:40:21.081960 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-19 03:40:21.081967 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-19 03:40:21.081973 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-19 03:40:21.081991 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-19 03:40:21.081997 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-19 03:40:21.082066 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-19 03:40:21.082074 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-19 03:40:21.082080 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-19 03:40:21.082090 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-19 03:40:21.082097 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-19 03:40:21.082104 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-19 03:40:21.082110 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-19 03:40:21.082116 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-19 03:40:21.082122 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-19 03:40:21.082129 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-19 03:40:21.082135 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-19 03:40:21.082141 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-19 03:40:21.082147 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-19 03:40:21.082158 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-19 03:40:21.082168 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-19 03:40:21.082178 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-19 03:40:21.082194 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-19 03:40:21.082204 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-19 03:40:21.082215 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-19 03:40:21.082224 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-19 03:40:21.082234 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-19 03:40:21.082245 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-19 03:40:21.082254 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-19 03:40:21.082265 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-19 03:40:21.082294 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-19 03:40:21.082306 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-19 03:40:21.082317 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-19 03:40:21.082338 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-19 03:40:21.082348 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-19 03:40:21.082359 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-19 03:40:21.082370 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-19 03:40:21.082381 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-19 03:40:21.082389 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-19 03:40:21.082397 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-19 03:40:21.082404 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-19 03:40:21.082412 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-19 03:40:21.082427 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-19 03:40:21.082434 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-19 03:40:21.082441 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-19 03:40:21.082449 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-19 03:40:21.082456 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-19 03:40:21.082464 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-19 03:40:21.082471 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-19 03:40:21.082478 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-19 03:40:21.082485 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-19 03:40:21.082492 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-19 03:40:21.082500 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-19 03:40:21.082507 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-19 03:40:21.082514 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-19 03:40:21.082521 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-19 03:40:21.082529 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-19 03:40:21.082536 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-19 03:40:21.082543 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-19 03:40:21.082557 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-19 03:40:21.082565 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-19 03:40:21.082595 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-19 03:40:21.082613 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-19 03:40:21.082621 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-19 03:40:21.082629 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-19 03:40:21.082640 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-19 03:40:21.082646 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-19 03:40:21.082652 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-19 03:40:21.477468 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-19 03:40:21.483377 | orchestrator | + set -e 2026-03-19 03:40:21.483449 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 03:40:21.483456 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 03:40:21.483460 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 03:40:21.483464 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 03:40:21.483468 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 03:40:21.483473 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 03:40:21.483478 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 03:40:21.483482 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 03:40:21.483486 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 03:40:21.483490 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 03:40:21.483494 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 03:40:21.483498 | orchestrator | ++ export ARA=false 2026-03-19 03:40:21.483502 | orchestrator | ++ ARA=false 
2026-03-19 03:40:21.483506 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 03:40:21.483510 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 03:40:21.483514 | orchestrator | ++ export TEMPEST=false 2026-03-19 03:40:21.483518 | orchestrator | ++ TEMPEST=false 2026-03-19 03:40:21.483522 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 03:40:21.483525 | orchestrator | ++ IS_ZUUL=true 2026-03-19 03:40:21.483529 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:40:21.483533 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:40:21.483537 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 03:40:21.483541 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 03:40:21.483544 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 03:40:21.483548 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 03:40:21.483553 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 03:40:21.483557 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 03:40:21.483560 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 03:40:21.483564 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 03:40:21.483568 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 03:40:21.483572 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-19 03:40:21.491037 | orchestrator | + set -e 2026-03-19 03:40:21.491117 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 03:40:21.491123 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 03:40:21.491129 | orchestrator | ++ INTERACTIVE=false 2026-03-19 03:40:21.491133 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 03:40:21.491137 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 03:40:21.491142 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-19 03:40:21.491402 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-03-19 03:40:21.496254 | orchestrator | 2026-03-19 03:40:21.496329 | orchestrator | # Ceph status 2026-03-19 03:40:21.496339 | orchestrator | 2026-03-19 03:40:21.496373 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 03:40:21.496382 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 03:40:21.496389 | orchestrator | + echo 2026-03-19 03:40:21.496395 | orchestrator | + echo '# Ceph status' 2026-03-19 03:40:21.496402 | orchestrator | + echo 2026-03-19 03:40:21.496408 | orchestrator | + ceph -s 2026-03-19 03:40:22.113676 | orchestrator | cluster: 2026-03-19 03:40:22.113759 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-19 03:40:22.113766 | orchestrator | health: HEALTH_OK 2026-03-19 03:40:22.113772 | orchestrator | 2026-03-19 03:40:22.113777 | orchestrator | services: 2026-03-19 03:40:22.113783 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 68m) 2026-03-19 03:40:22.113789 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0 2026-03-19 03:40:22.113795 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-19 03:40:22.113801 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m) 2026-03-19 03:40:22.113806 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-19 03:40:22.113810 | orchestrator | 2026-03-19 03:40:22.113815 | orchestrator | data: 2026-03-19 03:40:22.113820 | orchestrator | volumes: 1/1 healthy 2026-03-19 03:40:22.113825 | orchestrator | pools: 14 pools, 417 pgs 2026-03-19 03:40:22.113829 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-19 03:40:22.113834 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-03-19 03:40:22.113839 | orchestrator | pgs: 417 active+clean 2026-03-19 03:40:22.113843 | orchestrator | 2026-03-19 03:40:22.167205 | orchestrator | 2026-03-19 03:40:22.167276 | orchestrator | # Ceph versions 2026-03-19 
03:40:22.167283 | orchestrator | 2026-03-19 03:40:22.167287 | orchestrator | + echo 2026-03-19 03:40:22.167292 | orchestrator | + echo '# Ceph versions' 2026-03-19 03:40:22.167297 | orchestrator | + echo 2026-03-19 03:40:22.167301 | orchestrator | + ceph versions 2026-03-19 03:40:22.726000 | orchestrator | { 2026-03-19 03:40:22.726143 | orchestrator | "mon": { 2026-03-19 03:40:22.726155 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-19 03:40:22.726164 | orchestrator | }, 2026-03-19 03:40:22.726171 | orchestrator | "mgr": { 2026-03-19 03:40:22.726178 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-19 03:40:22.726184 | orchestrator | }, 2026-03-19 03:40:22.726191 | orchestrator | "osd": { 2026-03-19 03:40:22.726198 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-19 03:40:22.726204 | orchestrator | }, 2026-03-19 03:40:22.726211 | orchestrator | "mds": { 2026-03-19 03:40:22.726217 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-19 03:40:22.726223 | orchestrator | }, 2026-03-19 03:40:22.726230 | orchestrator | "rgw": { 2026-03-19 03:40:22.726236 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-19 03:40:22.726242 | orchestrator | }, 2026-03-19 03:40:22.726248 | orchestrator | "overall": { 2026-03-19 03:40:22.726256 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-19 03:40:22.726263 | orchestrator | } 2026-03-19 03:40:22.726269 | orchestrator | } 2026-03-19 03:40:22.771211 | orchestrator | 2026-03-19 03:40:22.771276 | orchestrator | # Ceph OSD tree 2026-03-19 03:40:22.771282 | orchestrator | 2026-03-19 03:40:22.771287 | orchestrator | + echo 2026-03-19 03:40:22.771292 | orchestrator | + echo '# Ceph OSD tree' 2026-03-19 
03:40:22.771297 | orchestrator | + echo 2026-03-19 03:40:22.771301 | orchestrator | + ceph osd df tree 2026-03-19 03:40:23.295655 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-19 03:40:23.295763 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 385 MiB 113 GiB 5.88 1.00 - root default 2026-03-19 03:40:23.295776 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-03-19 03:40:23.295785 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.06 1.03 193 up osd.0 2026-03-19 03:40:23.295792 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.67 0.96 211 up osd.4 2026-03-19 03:40:23.295800 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-03-19 03:40:23.295834 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.92 1.18 223 up osd.1 2026-03-19 03:40:23.295842 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 984 MiB 923 MiB 1 KiB 62 MiB 19 GiB 4.81 0.82 183 up osd.5 2026-03-19 03:40:23.295850 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-03-19 03:40:23.295860 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.12 1.04 199 up osd.2 2026-03-19 03:40:23.295868 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.69 0.97 209 up osd.3 2026-03-19 03:40:23.295876 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 385 MiB 113 GiB 5.88 2026-03-19 03:40:23.295885 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.63 2026-03-19 03:40:23.338157 | orchestrator | 2026-03-19 03:40:23.338241 | orchestrator | # Ceph monitor status 2026-03-19 03:40:23.338253 | orchestrator | 2026-03-19 03:40:23.338261 | orchestrator | + echo 2026-03-19 03:40:23.338270 | orchestrator | + echo '# 
Ceph monitor status' 2026-03-19 03:40:23.338278 | orchestrator | + echo 2026-03-19 03:40:23.338286 | orchestrator | + ceph mon stat 2026-03-19 03:40:23.948919 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-19 03:40:24.003447 | orchestrator | 2026-03-19 03:40:24.003703 | orchestrator | # Ceph quorum status 2026-03-19 03:40:24.003743 | orchestrator | 2026-03-19 03:40:24.003765 | orchestrator | + echo 2026-03-19 03:40:24.003786 | orchestrator | + echo '# Ceph quorum status' 2026-03-19 03:40:24.003805 | orchestrator | + echo 2026-03-19 03:40:24.004756 | orchestrator | + ceph quorum_status 2026-03-19 03:40:24.004809 | orchestrator | + jq 2026-03-19 03:40:24.620876 | orchestrator | { 2026-03-19 03:40:24.620974 | orchestrator | "election_epoch": 8, 2026-03-19 03:40:24.620985 | orchestrator | "quorum": [ 2026-03-19 03:40:24.620992 | orchestrator | 0, 2026-03-19 03:40:24.620999 | orchestrator | 1, 2026-03-19 03:40:24.621004 | orchestrator | 2 2026-03-19 03:40:24.621010 | orchestrator | ], 2026-03-19 03:40:24.621017 | orchestrator | "quorum_names": [ 2026-03-19 03:40:24.621023 | orchestrator | "testbed-node-0", 2026-03-19 03:40:24.621029 | orchestrator | "testbed-node-1", 2026-03-19 03:40:24.621035 | orchestrator | "testbed-node-2" 2026-03-19 03:40:24.621040 | orchestrator | ], 2026-03-19 03:40:24.621047 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-19 03:40:24.621054 | orchestrator | "quorum_age": 4129, 2026-03-19 03:40:24.621059 | orchestrator | "features": { 2026-03-19 03:40:24.621065 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-19 03:40:24.621071 | orchestrator | "quorum_mon": [ 2026-03-19 03:40:24.621077 | 
orchestrator | "kraken", 2026-03-19 03:40:24.621083 | orchestrator | "luminous", 2026-03-19 03:40:24.621089 | orchestrator | "mimic", 2026-03-19 03:40:24.621095 | orchestrator | "osdmap-prune", 2026-03-19 03:40:24.621101 | orchestrator | "nautilus", 2026-03-19 03:40:24.621106 | orchestrator | "octopus", 2026-03-19 03:40:24.621112 | orchestrator | "pacific", 2026-03-19 03:40:24.621118 | orchestrator | "elector-pinging", 2026-03-19 03:40:24.621124 | orchestrator | "quincy", 2026-03-19 03:40:24.621130 | orchestrator | "reef" 2026-03-19 03:40:24.621136 | orchestrator | ] 2026-03-19 03:40:24.621142 | orchestrator | }, 2026-03-19 03:40:24.621148 | orchestrator | "monmap": { 2026-03-19 03:40:24.621153 | orchestrator | "epoch": 1, 2026-03-19 03:40:24.621160 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-19 03:40:24.621167 | orchestrator | "modified": "2026-03-19T02:31:17.645692Z", 2026-03-19 03:40:24.621173 | orchestrator | "created": "2026-03-19T02:31:17.645692Z", 2026-03-19 03:40:24.621179 | orchestrator | "min_mon_release": 18, 2026-03-19 03:40:24.621185 | orchestrator | "min_mon_release_name": "reef", 2026-03-19 03:40:24.621190 | orchestrator | "election_strategy": 1, 2026-03-19 03:40:24.621196 | orchestrator | "disallowed_leaders: ": "", 2026-03-19 03:40:24.621202 | orchestrator | "stretch_mode": false, 2026-03-19 03:40:24.621208 | orchestrator | "tiebreaker_mon": "", 2026-03-19 03:40:24.621234 | orchestrator | "removed_ranks: ": "", 2026-03-19 03:40:24.621240 | orchestrator | "features": { 2026-03-19 03:40:24.621246 | orchestrator | "persistent": [ 2026-03-19 03:40:24.621252 | orchestrator | "kraken", 2026-03-19 03:40:24.621257 | orchestrator | "luminous", 2026-03-19 03:40:24.621263 | orchestrator | "mimic", 2026-03-19 03:40:24.621269 | orchestrator | "osdmap-prune", 2026-03-19 03:40:24.621274 | orchestrator | "nautilus", 2026-03-19 03:40:24.621280 | orchestrator | "octopus", 2026-03-19 03:40:24.621286 | orchestrator | "pacific", 2026-03-19 
03:40:24.621292 | orchestrator | "elector-pinging", 2026-03-19 03:40:24.621299 | orchestrator | "quincy", 2026-03-19 03:40:24.621310 | orchestrator | "reef" 2026-03-19 03:40:24.621319 | orchestrator | ], 2026-03-19 03:40:24.621328 | orchestrator | "optional": [] 2026-03-19 03:40:24.621337 | orchestrator | }, 2026-03-19 03:40:24.621346 | orchestrator | "mons": [ 2026-03-19 03:40:24.621355 | orchestrator | { 2026-03-19 03:40:24.621364 | orchestrator | "rank": 0, 2026-03-19 03:40:24.621374 | orchestrator | "name": "testbed-node-0", 2026-03-19 03:40:24.621383 | orchestrator | "public_addrs": { 2026-03-19 03:40:24.621393 | orchestrator | "addrvec": [ 2026-03-19 03:40:24.621401 | orchestrator | { 2026-03-19 03:40:24.621408 | orchestrator | "type": "v2", 2026-03-19 03:40:24.621414 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-19 03:40:24.621421 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621427 | orchestrator | }, 2026-03-19 03:40:24.621434 | orchestrator | { 2026-03-19 03:40:24.621440 | orchestrator | "type": "v1", 2026-03-19 03:40:24.621447 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-19 03:40:24.621453 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621460 | orchestrator | } 2026-03-19 03:40:24.621466 | orchestrator | ] 2026-03-19 03:40:24.621473 | orchestrator | }, 2026-03-19 03:40:24.621479 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-19 03:40:24.621486 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-19 03:40:24.621492 | orchestrator | "priority": 0, 2026-03-19 03:40:24.621499 | orchestrator | "weight": 0, 2026-03-19 03:40:24.621505 | orchestrator | "crush_location": "{}" 2026-03-19 03:40:24.621512 | orchestrator | }, 2026-03-19 03:40:24.621518 | orchestrator | { 2026-03-19 03:40:24.621525 | orchestrator | "rank": 1, 2026-03-19 03:40:24.621531 | orchestrator | "name": "testbed-node-1", 2026-03-19 03:40:24.621538 | orchestrator | "public_addrs": { 2026-03-19 03:40:24.621545 | orchestrator | "addrvec": [ 2026-03-19 
03:40:24.621551 | orchestrator | { 2026-03-19 03:40:24.621558 | orchestrator | "type": "v2", 2026-03-19 03:40:24.621628 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-19 03:40:24.621642 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621652 | orchestrator | }, 2026-03-19 03:40:24.621661 | orchestrator | { 2026-03-19 03:40:24.621670 | orchestrator | "type": "v1", 2026-03-19 03:40:24.621679 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-19 03:40:24.621689 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621698 | orchestrator | } 2026-03-19 03:40:24.621707 | orchestrator | ] 2026-03-19 03:40:24.621716 | orchestrator | }, 2026-03-19 03:40:24.621726 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-19 03:40:24.621735 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-19 03:40:24.621746 | orchestrator | "priority": 0, 2026-03-19 03:40:24.621756 | orchestrator | "weight": 0, 2026-03-19 03:40:24.621766 | orchestrator | "crush_location": "{}" 2026-03-19 03:40:24.621777 | orchestrator | }, 2026-03-19 03:40:24.621786 | orchestrator | { 2026-03-19 03:40:24.621797 | orchestrator | "rank": 2, 2026-03-19 03:40:24.621804 | orchestrator | "name": "testbed-node-2", 2026-03-19 03:40:24.621810 | orchestrator | "public_addrs": { 2026-03-19 03:40:24.621816 | orchestrator | "addrvec": [ 2026-03-19 03:40:24.621822 | orchestrator | { 2026-03-19 03:40:24.621828 | orchestrator | "type": "v2", 2026-03-19 03:40:24.621834 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-19 03:40:24.621840 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621846 | orchestrator | }, 2026-03-19 03:40:24.621851 | orchestrator | { 2026-03-19 03:40:24.621857 | orchestrator | "type": "v1", 2026-03-19 03:40:24.621863 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-19 03:40:24.621869 | orchestrator | "nonce": 0 2026-03-19 03:40:24.621875 | orchestrator | } 2026-03-19 03:40:24.621880 | orchestrator | ] 2026-03-19 03:40:24.621894 | orchestrator | }, 2026-03-19 03:40:24.621900 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-19 03:40:24.621907 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-19 03:40:24.621913 | orchestrator | "priority": 0, 2026-03-19 03:40:24.621920 | orchestrator | "weight": 0, 2026-03-19 03:40:24.621931 | orchestrator | "crush_location": "{}" 2026-03-19 03:40:24.621941 | orchestrator | } 2026-03-19 03:40:24.621950 | orchestrator | ] 2026-03-19 03:40:24.621958 | orchestrator | } 2026-03-19 03:40:24.621967 | orchestrator | } 2026-03-19 03:40:24.621990 | orchestrator | 2026-03-19 03:40:24.622001 | orchestrator | # Ceph free space status 2026-03-19 03:40:24.622011 | orchestrator | 2026-03-19 03:40:24.622134 | orchestrator | + echo 2026-03-19 03:40:24.622151 | orchestrator | + echo '# Ceph free space status' 2026-03-19 03:40:24.622161 | orchestrator | + echo 2026-03-19 03:40:24.622171 | orchestrator | + ceph df 2026-03-19 03:40:25.215502 | orchestrator | --- RAW STORAGE --- 2026-03-19 03:40:25.215638 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-19 03:40:25.215665 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-03-19 03:40:25.215673 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-03-19 03:40:25.215680 | orchestrator | 2026-03-19 03:40:25.215689 | orchestrator | --- POOLS --- 2026-03-19 03:40:25.215696 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-19 03:40:25.215704 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-19 03:40:25.215711 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-19 03:40:25.215717 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-19 03:40:25.215724 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-19 03:40:25.215730 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-19 03:40:25.215737 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-19 03:40:25.215744 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-03-19 03:40:25.215751 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-19 03:40:25.215758 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-03-19 03:40:25.215764 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-19 03:40:25.215771 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-19 03:40:25.215778 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2026-03-19 03:40:25.215784 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-19 03:40:25.215791 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-19 03:40:25.261860 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-19 03:40:25.308756 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-19 03:40:25.308846 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-19 03:40:25.308856 | orchestrator | + osism apply facts 2026-03-19 03:40:34.881839 | orchestrator | 2026-03-19 03:40:34 | INFO  | Task 16bf807c-5127-4514-a690-9c68596445b8 (facts) was prepared for execution. 2026-03-19 03:40:34.881972 | orchestrator | 2026-03-19 03:40:34 | INFO  | It takes a moment until task 16bf807c-5127-4514-a690-9c68596445b8 (facts) has been started and output is visible here. 
2026-03-19 03:40:49.493159 | orchestrator | 2026-03-19 03:40:49.493305 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-19 03:40:49.493320 | orchestrator | 2026-03-19 03:40:49.493327 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 03:40:49.493335 | orchestrator | Thursday 19 March 2026 03:40:39 +0000 (0:00:00.270) 0:00:00.270 ******** 2026-03-19 03:40:49.493341 | orchestrator | ok: [testbed-manager] 2026-03-19 03:40:49.493349 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:40:49.493356 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:40:49.493362 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:40:49.493368 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:40:49.493374 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:40:49.493416 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:40:49.493431 | orchestrator | 2026-03-19 03:40:49.493440 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 03:40:49.493449 | orchestrator | Thursday 19 March 2026 03:40:40 +0000 (0:00:01.143) 0:00:01.414 ******** 2026-03-19 03:40:49.493460 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:40:49.493470 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:40:49.493480 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:40:49.493488 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:40:49.493497 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:40:49.493505 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:40:49.493515 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:40:49.493525 | orchestrator | 2026-03-19 03:40:49.493535 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 03:40:49.493546 | orchestrator | 2026-03-19 03:40:49.493556 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-19 03:40:49.493567 | orchestrator | Thursday 19 March 2026 03:40:41 +0000 (0:00:01.366) 0:00:02.780 ******** 2026-03-19 03:40:49.493578 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:40:49.493672 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:40:49.493679 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:40:49.493685 | orchestrator | ok: [testbed-manager] 2026-03-19 03:40:49.493692 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:40:49.493699 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:40:49.493706 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:40:49.493716 | orchestrator | 2026-03-19 03:40:49.493730 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 03:40:49.493745 | orchestrator | 2026-03-19 03:40:49.493755 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 03:40:49.493766 | orchestrator | Thursday 19 March 2026 03:40:48 +0000 (0:00:06.513) 0:00:09.294 ******** 2026-03-19 03:40:49.493776 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:40:49.493788 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:40:49.493799 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:40:49.493811 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:40:49.493821 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:40:49.493832 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:40:49.493842 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:40:49.493849 | orchestrator | 2026-03-19 03:40:49.493856 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:40:49.493864 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493887 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-19 03:40:49.493894 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493902 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493908 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493915 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493922 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:40:49.493929 | orchestrator | 2026-03-19 03:40:49.493936 | orchestrator | 2026-03-19 03:40:49.493943 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:40:49.493950 | orchestrator | Thursday 19 March 2026 03:40:48 +0000 (0:00:00.575) 0:00:09.870 ******** 2026-03-19 03:40:49.493966 | orchestrator | =============================================================================== 2026-03-19 03:40:49.493973 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.51s 2026-03-19 03:40:49.493980 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s 2026-03-19 03:40:49.493987 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-03-19 03:40:49.493994 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-03-19 03:40:49.877806 | orchestrator | + osism validate ceph-mons 2026-03-19 03:41:22.818069 | orchestrator | 2026-03-19 03:41:22.818186 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-19 03:41:22.818203 | orchestrator | 2026-03-19 03:41:22.818216 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-19 03:41:22.818228 | orchestrator | Thursday 19 March 2026 03:41:06 +0000 (0:00:00.447) 0:00:00.447 ******** 2026-03-19 03:41:22.818240 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-19 03:41:22.818252 | orchestrator | 2026-03-19 03:41:22.818263 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-19 03:41:22.818274 | orchestrator | Thursday 19 March 2026 03:41:07 +0000 (0:00:00.857) 0:00:01.304 ******** 2026-03-19 03:41:22.818285 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-19 03:41:22.818296 | orchestrator | 2026-03-19 03:41:22.818307 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-19 03:41:22.818321 | orchestrator | Thursday 19 March 2026 03:41:08 +0000 (0:00:01.000) 0:00:02.305 ******** 2026-03-19 03:41:22.818340 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:41:22.818360 | orchestrator | 2026-03-19 03:41:22.818379 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-19 03:41:22.818396 | orchestrator | Thursday 19 March 2026 03:41:08 +0000 (0:00:00.134) 0:00:02.440 ******** 2026-03-19 03:41:22.818413 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:41:22.818429 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:41:22.818445 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:41:22.818462 | orchestrator | 2026-03-19 03:41:22.818481 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-19 03:41:22.818499 | orchestrator | Thursday 19 March 2026 03:41:09 +0000 (0:00:00.297) 0:00:02.737 ******** 2026-03-19 03:41:22.818519 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:41:22.818541 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:41:22.818561 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:41:22.818579 | 
orchestrator | 2026-03-19 03:41:22.818632 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-19 03:41:22.818652 | orchestrator | Thursday 19 March 2026 03:41:10 +0000 (0:00:01.100) 0:00:03.838 ******** 2026-03-19 03:41:22.818671 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:41:22.818692 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:41:22.818711 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:41:22.818730 | orchestrator | 2026-03-19 03:41:22.818743 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-19 03:41:22.818757 | orchestrator | Thursday 19 March 2026 03:41:10 +0000 (0:00:00.322) 0:00:04.160 ******** 2026-03-19 03:41:22.818769 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:41:22.818782 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:41:22.818795 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:41:22.818808 | orchestrator | 2026-03-19 03:41:22.818820 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-19 03:41:22.818833 | orchestrator | Thursday 19 March 2026 03:41:10 +0000 (0:00:00.501) 0:00:04.661 ******** 2026-03-19 03:41:22.818846 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:41:22.818858 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:41:22.818871 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:41:22.818884 | orchestrator | 2026-03-19 03:41:22.818897 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-19 03:41:22.818934 | orchestrator | Thursday 19 March 2026 03:41:11 +0000 (0:00:00.297) 0:00:04.958 ******** 2026-03-19 03:41:22.818946 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:41:22.818957 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:41:22.818968 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:41:22.818979 | orchestrator | 2026-03-19 
03:41:22.818990 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-19 03:41:22.819001 | orchestrator | Thursday 19 March 2026 03:41:11 +0000 (0:00:00.291) 0:00:05.250 ********
2026-03-19 03:41:22.819012 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819023 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:22.819034 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:22.819045 | orchestrator |
2026-03-19 03:41:22.819056 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-19 03:41:22.819067 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.512) 0:00:05.763 ********
2026-03-19 03:41:22.819079 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819089 | orchestrator |
2026-03-19 03:41:22.819101 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-19 03:41:22.819112 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.274) 0:00:06.037 ********
2026-03-19 03:41:22.819123 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819134 | orchestrator |
2026-03-19 03:41:22.819145 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-19 03:41:22.819155 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.255) 0:00:06.292 ********
2026-03-19 03:41:22.819166 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819177 | orchestrator |
2026-03-19 03:41:22.819188 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:22.819199 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.071) 0:00:06.536 ********
2026-03-19 03:41:22.819209 | orchestrator |
2026-03-19 03:41:22.819220 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:22.819231 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.071) 0:00:06.608 ********
2026-03-19 03:41:22.819242 | orchestrator |
2026-03-19 03:41:22.819253 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:22.819263 | orchestrator | Thursday 19 March 2026 03:41:12 +0000 (0:00:00.071) 0:00:06.679 ********
2026-03-19 03:41:22.819274 | orchestrator |
2026-03-19 03:41:22.819285 | orchestrator | TASK [Print report file information] *******************************************
2026-03-19 03:41:22.819296 | orchestrator | Thursday 19 March 2026 03:41:13 +0000 (0:00:00.076) 0:00:06.756 ********
2026-03-19 03:41:22.819307 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819318 | orchestrator |
2026-03-19 03:41:22.819328 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-19 03:41:22.819340 | orchestrator | Thursday 19 March 2026 03:41:13 +0000 (0:00:00.253) 0:00:07.010 ********
2026-03-19 03:41:22.819351 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819361 | orchestrator |
2026-03-19 03:41:22.819396 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-19 03:41:22.819407 | orchestrator | Thursday 19 March 2026 03:41:13 +0000 (0:00:00.234) 0:00:07.244 ********
2026-03-19 03:41:22.819418 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819429 | orchestrator |
2026-03-19 03:41:22.819440 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-19 03:41:22.819451 | orchestrator | Thursday 19 March 2026 03:41:13 +0000 (0:00:00.133) 0:00:07.378 ********
2026-03-19 03:41:22.819462 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:41:22.819478 | orchestrator |
2026-03-19 03:41:22.819489 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-19 03:41:22.819500 | orchestrator | Thursday 19 March 2026 03:41:15 +0000 (0:00:01.862) 0:00:09.240 ********
2026-03-19 03:41:22.819517 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819535 | orchestrator |
2026-03-19 03:41:22.819552 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-19 03:41:22.819581 | orchestrator | Thursday 19 March 2026 03:41:16 +0000 (0:00:00.510) 0:00:09.750 ********
2026-03-19 03:41:22.819653 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819670 | orchestrator |
2026-03-19 03:41:22.819687 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-19 03:41:22.819704 | orchestrator | Thursday 19 March 2026 03:41:16 +0000 (0:00:00.158) 0:00:09.909 ********
2026-03-19 03:41:22.819722 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819738 | orchestrator |
2026-03-19 03:41:22.819753 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-19 03:41:22.819770 | orchestrator | Thursday 19 March 2026 03:41:16 +0000 (0:00:00.331) 0:00:10.240 ********
2026-03-19 03:41:22.819786 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819804 | orchestrator |
2026-03-19 03:41:22.819821 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-19 03:41:22.819837 | orchestrator | Thursday 19 March 2026 03:41:16 +0000 (0:00:00.305) 0:00:10.545 ********
2026-03-19 03:41:22.819855 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.819874 | orchestrator |
2026-03-19 03:41:22.819892 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-19 03:41:22.819911 | orchestrator | Thursday 19 March 2026 03:41:16 +0000 (0:00:00.131) 0:00:10.677 ********
2026-03-19 03:41:22.819930 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.819949 | orchestrator |
2026-03-19 03:41:22.819967 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-19 03:41:22.819985 | orchestrator | Thursday 19 March 2026 03:41:17 +0000 (0:00:00.134) 0:00:10.812 ********
2026-03-19 03:41:22.820005 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.820023 | orchestrator |
2026-03-19 03:41:22.820042 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-19 03:41:22.820054 | orchestrator | Thursday 19 March 2026 03:41:17 +0000 (0:00:00.105) 0:00:10.917 ********
2026-03-19 03:41:22.820064 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:41:22.820075 | orchestrator |
2026-03-19 03:41:22.820086 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-19 03:41:22.820097 | orchestrator | Thursday 19 March 2026 03:41:18 +0000 (0:00:01.460) 0:00:12.378 ********
2026-03-19 03:41:22.820113 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.820130 | orchestrator |
2026-03-19 03:41:22.820156 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-19 03:41:22.820176 | orchestrator | Thursday 19 March 2026 03:41:18 +0000 (0:00:00.303) 0:00:12.681 ********
2026-03-19 03:41:22.820191 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.820207 | orchestrator |
2026-03-19 03:41:22.820224 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-19 03:41:22.820240 | orchestrator | Thursday 19 March 2026 03:41:19 +0000 (0:00:00.143) 0:00:12.824 ********
2026-03-19 03:41:22.820257 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:22.820273 | orchestrator |
2026-03-19 03:41:22.820291 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-19 03:41:22.820308 | orchestrator | Thursday 19 March 2026 03:41:19 +0000 (0:00:00.142) 0:00:12.967 ********
2026-03-19 03:41:22.820339 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.820355 | orchestrator |
2026-03-19 03:41:22.820371 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-19 03:41:22.820387 | orchestrator | Thursday 19 March 2026 03:41:19 +0000 (0:00:00.130) 0:00:13.097 ********
2026-03-19 03:41:22.820404 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.820420 | orchestrator |
2026-03-19 03:41:22.820437 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-19 03:41:22.820454 | orchestrator | Thursday 19 March 2026 03:41:19 +0000 (0:00:00.366) 0:00:13.464 ********
2026-03-19 03:41:22.820472 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:22.820489 | orchestrator |
2026-03-19 03:41:22.820524 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-19 03:41:22.820543 | orchestrator | Thursday 19 March 2026 03:41:20 +0000 (0:00:00.279) 0:00:13.743 ********
2026-03-19 03:41:22.820561 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:22.820578 | orchestrator |
2026-03-19 03:41:22.820679 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-19 03:41:22.820699 | orchestrator | Thursday 19 March 2026 03:41:20 +0000 (0:00:00.251) 0:00:13.994 ********
2026-03-19 03:41:22.820717 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:22.820732 | orchestrator |
2026-03-19 03:41:22.820743 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-19 03:41:22.820754 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:01.764) 0:00:15.759 ********
2026-03-19 03:41:22.820765 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:22.820776 | orchestrator |
2026-03-19 03:41:22.820787 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-19 03:41:22.820797 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:00.269) 0:00:16.029 ********
2026-03-19 03:41:22.820808 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:22.820819 | orchestrator |
2026-03-19 03:41:22.820849 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:25.592011 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:00.261) 0:00:16.291 ********
2026-03-19 03:41:25.592109 | orchestrator |
2026-03-19 03:41:25.592121 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:25.592130 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:00.070) 0:00:16.362 ********
2026-03-19 03:41:25.592138 | orchestrator |
2026-03-19 03:41:25.592147 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:25.592155 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:00.070) 0:00:16.432 ********
2026-03-19 03:41:25.592162 | orchestrator |
2026-03-19 03:41:25.592170 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-19 03:41:25.592177 | orchestrator | Thursday 19 March 2026 03:41:22 +0000 (0:00:00.074) 0:00:16.507 ********
2026-03-19 03:41:25.592186 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:25.592193 | orchestrator |
2026-03-19 03:41:25.592200 | orchestrator | TASK [Print report file information] *******************************************
2026-03-19 03:41:25.592208 | orchestrator | Thursday 19 March 2026 03:41:24 +0000 (0:00:01.550) 0:00:18.057 ********
2026-03-19 03:41:25.592216 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-19 03:41:25.592223 | orchestrator |  "msg": [
2026-03-19 03:41:25.592233 | orchestrator |  "Validator run completed.",
2026-03-19 03:41:25.592241 | orchestrator |  "You can find the report file here:",
2026-03-19 03:41:25.592250 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-19T03:41:07+00:00-report.json",
2026-03-19 03:41:25.592257 | orchestrator |  "on the following host:",
2026-03-19 03:41:25.592261 | orchestrator |  "testbed-manager"
2026-03-19 03:41:25.592267 | orchestrator |  ]
2026-03-19 03:41:25.592287 | orchestrator | }
2026-03-19 03:41:25.592300 | orchestrator |
2026-03-19 03:41:25.592307 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:41:25.592316 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-19 03:41:25.592325 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 03:41:25.592333 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 03:41:25.592340 | orchestrator |
2026-03-19 03:41:25.592346 | orchestrator |
2026-03-19 03:41:25.592381 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:41:25.592390 | orchestrator | Thursday 19 March 2026 03:41:25 +0000 (0:00:00.862) 0:00:18.920 ********
2026-03-19 03:41:25.592397 | orchestrator | ===============================================================================
2026-03-19 03:41:25.592405 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.86s
2026-03-19 03:41:25.592413 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s
2026-03-19 03:41:25.592421 | orchestrator | Write report file ------------------------------------------------------- 1.55s
2026-03-19 03:41:25.592428 | orchestrator | Gather status data ------------------------------------------------------ 1.46s
2026-03-19 03:41:25.592436 | orchestrator | Get container info ------------------------------------------------------ 1.10s
2026-03-19 03:41:25.592444 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2026-03-19 03:41:25.592449 | orchestrator | Print report file information ------------------------------------------- 0.86s
2026-03-19 03:41:25.592454 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-03-19 03:41:25.592469 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.51s
2026-03-19 03:41:25.592477 | orchestrator | Set quorum test data ---------------------------------------------------- 0.51s
2026-03-19 03:41:25.592485 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s
2026-03-19 03:41:25.592492 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.37s
2026-03-19 03:41:25.592499 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-03-19 03:41:25.592505 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-03-19 03:41:25.592512 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-03-19 03:41:25.592518 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2026-03-19 03:41:25.592525 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-03-19 03:41:25.592534 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-03-19 03:41:25.592541 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s
2026-03-19 03:41:25.592549 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-03-19 03:41:25.936217 | orchestrator | + osism validate ceph-mgrs
2026-03-19 03:41:57.576488 | orchestrator |
2026-03-19 03:41:57.576684 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-19 03:41:57.576702 | orchestrator |
2026-03-19 03:41:57.576710 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-19 03:41:57.576718 | orchestrator | Thursday 19 March 2026 03:41:42 +0000 (0:00:00.434) 0:00:00.434 ********
2026-03-19 03:41:57.576725 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.576732 | orchestrator |
2026-03-19 03:41:57.576739 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-19 03:41:57.576746 | orchestrator | Thursday 19 March 2026 03:41:43 +0000 (0:00:00.848) 0:00:01.282 ********
2026-03-19 03:41:57.576820 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.576828 | orchestrator |
2026-03-19 03:41:57.576834 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-19 03:41:57.576841 | orchestrator | Thursday 19 March 2026 03:41:44 +0000 (0:00:01.035) 0:00:02.318 ********
2026-03-19 03:41:57.576848 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.576856 | orchestrator |
2026-03-19 03:41:57.576863 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-19 03:41:57.576870 | orchestrator | Thursday 19 March 2026 03:41:44 +0000 (0:00:00.139) 0:00:02.457 ********
2026-03-19 03:41:57.576877 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.576884 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:57.576890 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:57.576918 | orchestrator |
2026-03-19 03:41:57.576925 | orchestrator | TASK [Get container info] ******************************************************
2026-03-19 03:41:57.576932 | orchestrator | Thursday 19 March 2026 03:41:45 +0000 (0:00:00.327) 0:00:02.785 ********
2026-03-19 03:41:57.576938 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:57.576945 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.576951 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:57.576958 | orchestrator |
2026-03-19 03:41:57.576965 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-19 03:41:57.576971 | orchestrator | Thursday 19 March 2026 03:41:46 +0000 (0:00:01.125) 0:00:03.911 ********
2026-03-19 03:41:57.576978 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.576985 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:41:57.576991 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:41:57.576998 | orchestrator |
2026-03-19 03:41:57.577005 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-19 03:41:57.577012 | orchestrator | Thursday 19 March 2026 03:41:46 +0000 (0:00:00.285) 0:00:04.196 ********
2026-03-19 03:41:57.577021 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577029 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:57.577036 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:57.577044 | orchestrator |
2026-03-19 03:41:57.577051 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-19 03:41:57.577059 | orchestrator | Thursday 19 March 2026 03:41:46 +0000 (0:00:00.491) 0:00:04.688 ********
2026-03-19 03:41:57.577067 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577074 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:57.577082 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:57.577089 | orchestrator |
2026-03-19 03:41:57.577097 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-19 03:41:57.577105 | orchestrator | Thursday 19 March 2026 03:41:47 +0000 (0:00:00.309) 0:00:04.998 ********
2026-03-19 03:41:57.577112 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577120 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:41:57.577128 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:41:57.577135 | orchestrator |
2026-03-19 03:41:57.577143 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-19 03:41:57.577150 | orchestrator | Thursday 19 March 2026 03:41:47 +0000 (0:00:00.304) 0:00:05.303 ********
2026-03-19 03:41:57.577158 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577165 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:41:57.577172 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:41:57.577180 | orchestrator |
2026-03-19 03:41:57.577188 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-19 03:41:57.577195 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.472) 0:00:05.775 ********
2026-03-19 03:41:57.577203 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577210 | orchestrator |
2026-03-19 03:41:57.577218 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-19 03:41:57.577225 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.253) 0:00:06.028 ********
2026-03-19 03:41:57.577233 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577241 | orchestrator |
2026-03-19 03:41:57.577248 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-19 03:41:57.577256 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.258) 0:00:06.287 ********
2026-03-19 03:41:57.577264 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577272 | orchestrator |
2026-03-19 03:41:57.577280 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.577287 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.070) 0:00:06.542 ********
2026-03-19 03:41:57.577295 | orchestrator |
2026-03-19 03:41:57.577302 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.577311 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.070) 0:00:06.613 ********
2026-03-19 03:41:57.577324 | orchestrator |
2026-03-19 03:41:57.577331 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.577339 | orchestrator | Thursday 19 March 2026 03:41:48 +0000 (0:00:00.070) 0:00:06.683 ********
2026-03-19 03:41:57.577346 | orchestrator |
2026-03-19 03:41:57.577354 | orchestrator | TASK [Print report file information] *******************************************
2026-03-19 03:41:57.577361 | orchestrator | Thursday 19 March 2026 03:41:49 +0000 (0:00:00.075) 0:00:06.759 ********
2026-03-19 03:41:57.577369 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577376 | orchestrator |
2026-03-19 03:41:57.577384 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-19 03:41:57.577392 | orchestrator | Thursday 19 March 2026 03:41:49 +0000 (0:00:00.249) 0:00:07.009 ********
2026-03-19 03:41:57.577400 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577408 | orchestrator |
2026-03-19 03:41:57.577438 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-19 03:41:57.577449 | orchestrator | Thursday 19 March 2026 03:41:49 +0000 (0:00:00.259) 0:00:07.268 ********
2026-03-19 03:41:57.577459 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577471 | orchestrator |
2026-03-19 03:41:57.577480 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-19 03:41:57.577491 | orchestrator | Thursday 19 March 2026 03:41:49 +0000 (0:00:00.134) 0:00:07.403 ********
2026-03-19 03:41:57.577501 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:41:57.577512 | orchestrator |
2026-03-19 03:41:57.577523 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-19 03:41:57.577533 | orchestrator | Thursday 19 March 2026 03:41:51 +0000 (0:00:02.207) 0:00:09.610 ********
2026-03-19 03:41:57.577544 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577554 | orchestrator |
2026-03-19 03:41:57.577564 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-19 03:41:57.577575 | orchestrator | Thursday 19 March 2026 03:41:52 +0000 (0:00:00.447) 0:00:10.058 ********
2026-03-19 03:41:57.577585 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577615 | orchestrator |
2026-03-19 03:41:57.577626 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-03-19 03:41:57.577637 | orchestrator | Thursday 19 March 2026 03:41:52 +0000 (0:00:00.335) 0:00:10.393 ********
2026-03-19 03:41:57.577648 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577658 | orchestrator |
2026-03-19 03:41:57.577669 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-19 03:41:57.577682 | orchestrator | Thursday 19 March 2026 03:41:52 +0000 (0:00:00.139) 0:00:10.533 ********
2026-03-19 03:41:57.577691 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:41:57.577700 | orchestrator |
2026-03-19 03:41:57.577710 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-19 03:41:57.577719 | orchestrator | Thursday 19 March 2026 03:41:52 +0000 (0:00:00.149) 0:00:10.683 ********
2026-03-19 03:41:57.577729 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.577739 | orchestrator |
2026-03-19 03:41:57.577748 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-19 03:41:57.577759 | orchestrator | Thursday 19 March 2026 03:41:53 +0000 (0:00:00.289) 0:00:10.973 ********
2026-03-19 03:41:57.577769 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:41:57.577779 | orchestrator |
2026-03-19 03:41:57.577809 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-19 03:41:57.577822 | orchestrator | Thursday 19 March 2026 03:41:53 +0000 (0:00:00.261) 0:00:11.234 ********
2026-03-19 03:41:57.577833 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.577843 | orchestrator |
2026-03-19 03:41:57.577854 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-19 03:41:57.577866 | orchestrator | Thursday 19 March 2026 03:41:54 +0000 (0:00:01.261) 0:00:12.496 ********
2026-03-19 03:41:57.577877 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.577898 | orchestrator |
2026-03-19 03:41:57.577910 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-19 03:41:57.577921 | orchestrator | Thursday 19 March 2026 03:41:55 +0000 (0:00:00.260) 0:00:12.757 ********
2026-03-19 03:41:57.577932 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.577943 | orchestrator |
2026-03-19 03:41:57.577954 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.577965 | orchestrator | Thursday 19 March 2026 03:41:55 +0000 (0:00:00.079) 0:00:13.062 ********
2026-03-19 03:41:57.577975 | orchestrator |
2026-03-19 03:41:57.577987 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.577998 | orchestrator | Thursday 19 March 2026 03:41:55 +0000 (0:00:00.079) 0:00:13.141 ********
2026-03-19 03:41:57.578009 | orchestrator |
2026-03-19 03:41:57.578092 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:41:57.578105 | orchestrator | Thursday 19 March 2026 03:41:55 +0000 (0:00:00.071) 0:00:13.212 ********
2026-03-19 03:41:57.578116 | orchestrator |
2026-03-19 03:41:57.578126 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-19 03:41:57.578137 | orchestrator | Thursday 19 March 2026 03:41:55 +0000 (0:00:00.275) 0:00:13.487 ********
2026-03-19 03:41:57.578148 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-19 03:41:57.578158 | orchestrator |
2026-03-19 03:41:57.578169 | orchestrator | TASK [Print report file information] *******************************************
2026-03-19 03:41:57.578189 | orchestrator | Thursday 19 March 2026 03:41:57 +0000 (0:00:01.368) 0:00:14.856 ********
2026-03-19 03:41:57.578200 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-19 03:41:57.578211 | orchestrator |  "msg": [
2026-03-19 03:41:57.578220 | orchestrator |  "Validator run completed.",
2026-03-19 03:41:57.578227 | orchestrator |  "You can find the report file here:",
2026-03-19 03:41:57.578234 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-19T03:41:43+00:00-report.json",
2026-03-19 03:41:57.578242 | orchestrator |  "on the following host:",
2026-03-19 03:41:57.578249 | orchestrator |  "testbed-manager"
2026-03-19 03:41:57.578256 | orchestrator |  ]
2026-03-19 03:41:57.578263 | orchestrator | }
2026-03-19 03:41:57.578270 | orchestrator |
2026-03-19 03:41:57.578277 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:41:57.578284 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 03:41:57.578310 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 03:41:57.578330 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 03:41:57.925904 | orchestrator |
2026-03-19 03:41:57.925991 | orchestrator |
2026-03-19 03:41:57.926002 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:41:57.926012 | orchestrator | Thursday 19 March 2026 03:41:57 +0000 (0:00:00.424) 0:00:15.280 ********
2026-03-19 03:41:57.926051 | orchestrator | ===============================================================================
2026-03-19 03:41:57.926059 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.21s
2026-03-19 03:41:57.926068 | orchestrator | Write report file ------------------------------------------------------- 1.37s
2026-03-19 03:41:57.926076 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s
2026-03-19 03:41:57.926083 | orchestrator | Get container info ------------------------------------------------------ 1.13s
2026-03-19 03:41:57.926091 | orchestrator | Create report output directory ------------------------------------------ 1.04s
2026-03-19 03:41:57.926099 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s
2026-03-19 03:41:57.926136 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2026-03-19 03:41:57.926145 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.47s
2026-03-19 03:41:57.926153 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.45s
2026-03-19 03:41:57.926160 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s
2026-03-19 03:41:57.926168 | orchestrator | Print report file information ------------------------------------------- 0.42s
2026-03-19 03:41:57.926173 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s
2026-03-19 03:41:57.926177 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2026-03-19 03:41:57.926182 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-03-19 03:41:57.926187 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2026-03-19 03:41:57.926191 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2026-03-19 03:41:57.926197 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s
2026-03-19 03:41:57.926203 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-03-19 03:41:57.926210 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2026-03-19 03:41:57.926220 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-03-19 03:41:58.261307 | orchestrator | + osism validate ceph-osds
2026-03-19 03:42:19.674372 | orchestrator |
2026-03-19 03:42:19.674479 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-19 03:42:19.674492 | orchestrator |
2026-03-19 03:42:19.674500 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-19 03:42:19.674508 | orchestrator | Thursday 19 March 2026 03:42:15 +0000 (0:00:00.432) 0:00:00.432 ********
2026-03-19 03:42:19.674515 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:19.674523 | orchestrator |
2026-03-19 03:42:19.674529 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-19 03:42:19.674536 | orchestrator | Thursday 19 March 2026 03:42:15 +0000 (0:00:00.860) 0:00:01.293 ********
2026-03-19 03:42:19.674544 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:19.674550 | orchestrator |
2026-03-19 03:42:19.674557 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-19 03:42:19.674563 | orchestrator | Thursday 19 March 2026 03:42:16 +0000 (0:00:00.534) 0:00:01.827 ********
2026-03-19 03:42:19.674570 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:19.674576 | orchestrator |
2026-03-19 03:42:19.674582 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-19 03:42:19.674589 | orchestrator | Thursday 19 March 2026 03:42:17 +0000 (0:00:00.709) 0:00:02.537 ********
2026-03-19 03:42:19.674639 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:19.674649 | orchestrator |
2026-03-19 03:42:19.674656 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-19 03:42:19.674663 | orchestrator | Thursday 19 March 2026 03:42:17 +0000 (0:00:00.130) 0:00:02.667 ********
2026-03-19 03:42:19.674670 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:42:19.674676 | orchestrator |
2026-03-19 03:42:19.674696 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-19 03:42:19.674704 | orchestrator | Thursday 19 March 2026 03:42:17 +0000 (0:00:00.133) 0:00:02.801 ********
2026-03-19 03:42:19.674710 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:42:19.674717 | orchestrator | skipping: [testbed-node-4]
2026-03-19 03:42:19.674723 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:42:19.674730 | orchestrator |
2026-03-19 03:42:19.674736 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-19 03:42:19.674743 | orchestrator | Thursday 19 March 2026 03:42:17 +0000 (0:00:00.330) 0:00:03.132 ********
2026-03-19 03:42:19.674770 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:19.674776 | orchestrator |
2026-03-19 03:42:19.674783 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-19 03:42:19.674789 | orchestrator | Thursday 19 March 2026 03:42:17 +0000 (0:00:00.168) 0:00:03.300 ********
2026-03-19 03:42:19.674797 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:19.674803 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:19.674809 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:19.674815 | orchestrator |
2026-03-19 03:42:19.674821 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-19 03:42:19.674827 | orchestrator | Thursday 19 March 2026 03:42:18 +0000 (0:00:00.326) 0:00:03.627 ********
2026-03-19 03:42:19.674833 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:19.674839 | orchestrator |
2026-03-19 03:42:19.674845 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-19 03:42:19.674851 | orchestrator | Thursday 19 March 2026 03:42:19 +0000 (0:00:00.789) 0:00:04.416 ********
2026-03-19 03:42:19.674857 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:19.674864 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:19.674871 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:19.674877 | orchestrator |
2026-03-19 03:42:19.674884 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-19 03:42:19.674890 | orchestrator | Thursday 19 March 2026 03:42:19 +0000 (0:00:00.297) 0:00:04.713 ********
2026-03-19 03:42:19.674901 | orchestrator | skipping: [testbed-node-3] => (item={'id': '290c4da589ac3de6f7f37d322ae251013832a6f02267b866dac2b1a81db00f93', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-19 03:42:19.674911 | orchestrator | skipping: [testbed-node-3] => (item={'id': '120de54573765e138ba405961fa6f24cc067893c3fede8f312d51b0de17f4b42', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-19 03:42:19.674919 | orchestrator | skipping: [testbed-node-3] => (item={'id': '568ff09b2e9b86c1fcf134617f492009733704f2d511644e6a965dbd8c79ffec', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-19 03:42:19.674925 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36f5208d4887817633d7ed24d51b951a71a342f96ccdff0dcdaff37ef02237bb', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-19 03:42:19.674932 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e6d8c4e77401b5ebde848ecbd402233bd4ab0790f82e8dc41a041e6d0ab88b2d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-19 03:42:19.674959 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd93042499993c7d1ec48f8d10be2059aebb058fd4717553c48afbdae22b37cc1', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-19 03:42:19.674967 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0733c7d30b321b9f5174fc6abd6907c980c738704d834132b5eb94633472810', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-19 03:42:19.674974 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b1820ac62b61da8d47dc2fba80148bc81fde174a19a9ce9e8663dae4739837e7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})
2026-03-19 03:42:19.674981 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69f68d3489c658204302943fe334044b9abc37841bc845b3b50193f6340fd041', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-19 03:42:19.674999 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b7aaf6e04318d7961a2013bbc8849ab0c326f7ff347f5c367acb365bd4b875f4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-19 03:42:19.675007 | orchestrator | skipping: [testbed-node-3] => (item={'id': '489614a52cde8ea6bbffa030022361661e625f020255567f93904fb90736dc86', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-19 03:42:19.675016 | orchestrator | ok: [testbed-node-3] => (item={'id': '9c5558db1f0fe20f59bfbe7998b14f65cb8c24145afd762b1125a323f64b1324', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-19 03:42:19.675023 | orchestrator | ok: [testbed-node-3] => (item={'id': '3c47153821a97e0733d30c5c5cd46d207b2e2c858511f58ab6331ba80192fd26', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-19 03:42:19.675030 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e3d510f6c2216d5c6afc3fc0941d89ed51640842c4d376d2c9886825f17da19c', 'image':
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.675038 | orchestrator | skipping: [testbed-node-3] => (item={'id': '687725ce12aa117ba8176ee17d510446f9f4331b07b4af91b6294be9a662daa1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:19.675044 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c72a16c8bf1d5e6b651507be81287d604a2e2cb7c56ba5835390d031484d2b79', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:19.675051 | orchestrator | skipping: [testbed-node-3] => (item={'id': '28e6989b6be7628d3ac9020ffc6f316eef8f5a2c9abaafa2d1e4c927bc4e0573', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.675058 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c6cd1d22ac3a377916d934416e54cbbdd40cb23c05c31972ea1656fa9cd6eb0a', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.675066 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a26000d3ab588c3ee06bc5e40fc47ceceb102cd4aff5104ec231097b437e7146', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.675074 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fe3fbfc0664ab5d85f17442f1ecc4aa08859bf580b2818f75cfa290b123b70f1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 
9 minutes'})  2026-03-19 03:42:19.675086 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1e4c0a7c5a64dc75616eb9e301cec6d892d0be8f4e8f3764f2f384ec10905cd8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-19 03:42:19.923951 | orchestrator | skipping: [testbed-node-4] => (item={'id': '241989c27bf93b49e913d1a8dd18ca870fe11f4c6b6e340f40461409384e35d9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-19 03:42:19.924053 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d6ac3abc5f2aa3d61e3a77f2a4bb2694c2b25a3a6767fd56dbeb35ea187c1d1', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-03-19 03:42:19.924063 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f9a86488137c54924de63c19c0024f390b7e2857602b9b7db4e2a4f46eba51cc', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-03-19 03:42:19.924072 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4e778a7aad3daae070a4e363a6beea1d47218d054503c4aa32d86d5e4a4380f9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-03-19 03:42:19.924079 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33795a16d95ab93167e4152084a055a978d87fd1af091d3d9902245c6426beb7', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-03-19 03:42:19.924120 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': 'dbb92f34d6fd28e0722ff31f78670e08e80bff17c90ea5720f154fe1fa35effe', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-03-19 03:42:19.924128 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e72f3e11996c8cf7c74b6e3a0a97dcbade398eb82177a7a5e53f835d584f9935', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924135 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e89e18e2b16ec9027dc58260d5d9788d60b3751553274e7d26e114bab934f305', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924142 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09c838595ae2844706f1192a15a92e4a94533a069b0ccabf66e6d2798bd8c24e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924151 | orchestrator | ok: [testbed-node-4] => (item={'id': '30cc9d07f920d6a244dcd083a5d66d5fa8f4175f8310d640f13ee03d275fd713', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-19 03:42:19.924158 | orchestrator | ok: [testbed-node-4] => (item={'id': '55ae4e0c1396ee72043a7563a271ba8fd75e3e9a66399eae7d0888c9a9f9d9e9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-19 03:42:19.924164 | orchestrator | skipping: [testbed-node-4] => (item={'id': '85e38cabebc3e34f10a92651b2d4baea47ff6cac879b225497cf5a2f6ab0dd23', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924170 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c1b12d298a8d525a1326c68f5c5b86062a3291fb2673882ae7e42aa322ff10e5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:19.924177 | orchestrator | skipping: [testbed-node-4] => (item={'id': '29dd64bfa034ca87ac9d6f2bc952c4d453e5eb68f143cadd207cc9a28826307e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:19.924206 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9496aa38f5c134b61dfa065904f417201f237882d2365561a61543cbdc89424b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.924213 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b4601ca6b7b107721571123730dff3b33fc8dfe183f390aa6240b4b6defd0d50', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.924220 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'adbe21584ad5b87f7101f56886726c6923b331d9250452ce7c0952c93ae4bd2d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:19.924226 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05153c6b0d496e01066a54807164ca90cccc8bd2df175274cf458db9ef7e0006', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-19 03:42:19.924236 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': 'ce5e3d6d665c4b51b44bcd6d47d5b2578f4db00a9db6a1c417d1a50caafcd542', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-19 03:42:19.924242 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eafca92d239723ebf08900dc60f8d227f8cab93d5c8250b875ed236059bc0ea0', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-19 03:42:19.924249 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8613ff6e33887622ce1fd9913b95e5f4972692d472317a21183e2d8bb4dc05f', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-03-19 03:42:19.924255 | orchestrator | skipping: [testbed-node-5] => (item={'id': '58ccccad3128d7c97f54ae20f7c712ff81b9011c8772d23a6e97c6384ce1e437', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-03-19 03:42:19.924292 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2af892aa9db1fa58b4f08a601f23f2f845e023ead2a9f22d56ee2c46fa8ceda8', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-03-19 03:42:19.924303 | orchestrator | skipping: [testbed-node-5] => (item={'id': '502dfaaac559826e4118fe00dccce431b17d7b3c1d6269988598ffdaa79d5e5a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-03-19 03:42:19.924313 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a541fa13862afefd925213d16a30990849fcc25b9d3f1d8c6e43bfc4a9f678a4', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-03-19 03:42:19.924324 | orchestrator | skipping: [testbed-node-5] => (item={'id': '349ed39d6a7b9303b0194cc9f2d0a72c2062d08843f7b75d5d63d7b95318dd52', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924334 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11c26398cfee85af07928b9760ab38348c28f0eec84f2225eb2d4881bd489a75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924344 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af5abc4518367ceee191e34b59f4868e27d58bd55ec7cac84364182ec5fb9696', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:19.924361 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b429110d441e250c5e29f3429b9ba5cdc8868dcd58ac80b86cd535e614aea7d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-19 03:42:19.924379 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a76945e821cf328adbe979007d98f6bbc1ed33eba38cab90dbc347baae17cec6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-19 03:42:31.202108 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2b4090c3cdbfaabbf383e8a062d99809e6e178d7b1970cc9fc3e7f476096c970', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-03-19 03:42:31.202245 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': '005316d0897f533231fe97bb2b03d763ba94b0a79e36fd13e55a56164c895a33', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:31.202266 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7884912b19dd7fd98023fa03a9667a1201b6de719d236e29c88b30e642401e88', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-19 03:42:31.202307 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aa50ebd208eeaab53aeee73b93837074825e0cea3576b4bac7163a557bd9a752', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:31.202328 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c03833bf95eccb85f00d518a31d2f7f68dfd9cb4c34f2eee0485521dd1c3028', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:31.202346 | orchestrator | skipping: [testbed-node-5] => (item={'id': '545e70c5a383e4ab2e2469b7e15d9ddfb72a3ea2288522a2ff55c7775a24b5b5', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-19 03:42:31.202364 | orchestrator | 2026-03-19 03:42:31.202384 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-19 03:42:31.202403 | orchestrator | Thursday 19 March 2026 03:42:19 +0000 (0:00:00.557) 0:00:05.271 ******** 2026-03-19 03:42:31.202422 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.202441 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.202458 | orchestrator | ok: [testbed-node-5] 2026-03-19 
03:42:31.202474 | orchestrator | 2026-03-19 03:42:31.202490 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-19 03:42:31.202506 | orchestrator | Thursday 19 March 2026 03:42:20 +0000 (0:00:00.318) 0:00:05.589 ******** 2026-03-19 03:42:31.202523 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.202541 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:31.202558 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:31.202575 | orchestrator | 2026-03-19 03:42:31.202619 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-19 03:42:31.202639 | orchestrator | Thursday 19 March 2026 03:42:20 +0000 (0:00:00.501) 0:00:06.090 ******** 2026-03-19 03:42:31.202656 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.202674 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.202692 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.202710 | orchestrator | 2026-03-19 03:42:31.202729 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-19 03:42:31.202747 | orchestrator | Thursday 19 March 2026 03:42:21 +0000 (0:00:00.329) 0:00:06.420 ******** 2026-03-19 03:42:31.202796 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.202810 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.202820 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.202831 | orchestrator | 2026-03-19 03:42:31.202842 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-19 03:42:31.202853 | orchestrator | Thursday 19 March 2026 03:42:21 +0000 (0:00:00.278) 0:00:06.698 ******** 2026-03-19 03:42:31.202864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-19 03:42:31.202877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 
'state': 'running'})  2026-03-19 03:42:31.202887 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.202898 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-19 03:42:31.202909 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-19 03:42:31.202920 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:31.202931 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-19 03:42:31.202941 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-19 03:42:31.202952 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:31.202963 | orchestrator | 2026-03-19 03:42:31.202974 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-19 03:42:31.202985 | orchestrator | Thursday 19 March 2026 03:42:21 +0000 (0:00:00.321) 0:00:07.020 ******** 2026-03-19 03:42:31.202995 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203006 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.203017 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.203027 | orchestrator | 2026-03-19 03:42:31.203038 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-19 03:42:31.203049 | orchestrator | Thursday 19 March 2026 03:42:22 +0000 (0:00:00.506) 0:00:07.527 ******** 2026-03-19 03:42:31.203060 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203094 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:31.203106 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:31.203119 | orchestrator | 2026-03-19 03:42:31.203138 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-19 03:42:31.203149 | orchestrator | Thursday 19 
March 2026 03:42:22 +0000 (0:00:00.297) 0:00:07.825 ******** 2026-03-19 03:42:31.203160 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203171 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:31.203182 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:31.203192 | orchestrator | 2026-03-19 03:42:31.203209 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-19 03:42:31.203223 | orchestrator | Thursday 19 March 2026 03:42:22 +0000 (0:00:00.302) 0:00:08.127 ******** 2026-03-19 03:42:31.203234 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203245 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.203255 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.203266 | orchestrator | 2026-03-19 03:42:31.203277 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-19 03:42:31.203288 | orchestrator | Thursday 19 March 2026 03:42:23 +0000 (0:00:00.482) 0:00:08.610 ******** 2026-03-19 03:42:31.203298 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203309 | orchestrator | 2026-03-19 03:42:31.203320 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-19 03:42:31.203340 | orchestrator | Thursday 19 March 2026 03:42:23 +0000 (0:00:00.245) 0:00:08.855 ******** 2026-03-19 03:42:31.203351 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203362 | orchestrator | 2026-03-19 03:42:31.203372 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-19 03:42:31.203387 | orchestrator | Thursday 19 March 2026 03:42:23 +0000 (0:00:00.241) 0:00:09.097 ******** 2026-03-19 03:42:31.203417 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203436 | orchestrator | 2026-03-19 03:42:31.203454 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-03-19 03:42:31.203472 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.273) 0:00:09.370 ******** 2026-03-19 03:42:31.203490 | orchestrator | 2026-03-19 03:42:31.203508 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-19 03:42:31.203527 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.074) 0:00:09.444 ******** 2026-03-19 03:42:31.203547 | orchestrator | 2026-03-19 03:42:31.203564 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-19 03:42:31.203579 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.071) 0:00:09.516 ******** 2026-03-19 03:42:31.203590 | orchestrator | 2026-03-19 03:42:31.203632 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-19 03:42:31.203644 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.074) 0:00:09.591 ******** 2026-03-19 03:42:31.203655 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203665 | orchestrator | 2026-03-19 03:42:31.203676 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-19 03:42:31.203687 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.253) 0:00:09.844 ******** 2026-03-19 03:42:31.203697 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203708 | orchestrator | 2026-03-19 03:42:31.203719 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-19 03:42:31.203730 | orchestrator | Thursday 19 March 2026 03:42:24 +0000 (0:00:00.236) 0:00:10.081 ******** 2026-03-19 03:42:31.203740 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203751 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.203762 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.203772 | orchestrator | 2026-03-19 03:42:31.203783 | orchestrator | TASK [Set _mon_hostname 
fact] ************************************************** 2026-03-19 03:42:31.203794 | orchestrator | Thursday 19 March 2026 03:42:25 +0000 (0:00:00.298) 0:00:10.379 ******** 2026-03-19 03:42:31.203804 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203815 | orchestrator | 2026-03-19 03:42:31.203826 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-19 03:42:31.203837 | orchestrator | Thursday 19 March 2026 03:42:25 +0000 (0:00:00.688) 0:00:11.068 ******** 2026-03-19 03:42:31.203847 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 03:42:31.203858 | orchestrator | 2026-03-19 03:42:31.203869 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-19 03:42:31.203880 | orchestrator | Thursday 19 March 2026 03:42:27 +0000 (0:00:01.704) 0:00:12.772 ******** 2026-03-19 03:42:31.203890 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203901 | orchestrator | 2026-03-19 03:42:31.203912 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-19 03:42:31.203923 | orchestrator | Thursday 19 March 2026 03:42:27 +0000 (0:00:00.138) 0:00:12.911 ******** 2026-03-19 03:42:31.203933 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.203944 | orchestrator | 2026-03-19 03:42:31.203955 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-19 03:42:31.203965 | orchestrator | Thursday 19 March 2026 03:42:27 +0000 (0:00:00.305) 0:00:13.217 ******** 2026-03-19 03:42:31.203976 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:31.203987 | orchestrator | 2026-03-19 03:42:31.203997 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-19 03:42:31.204008 | orchestrator | Thursday 19 March 2026 03:42:27 +0000 (0:00:00.128) 0:00:13.345 ******** 2026-03-19 03:42:31.204019 
| orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.204029 | orchestrator | 2026-03-19 03:42:31.204040 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-19 03:42:31.204051 | orchestrator | Thursday 19 March 2026 03:42:28 +0000 (0:00:00.131) 0:00:13.477 ******** 2026-03-19 03:42:31.204071 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:31.204082 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:31.204092 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:31.204103 | orchestrator | 2026-03-19 03:42:31.204113 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-19 03:42:31.204124 | orchestrator | Thursday 19 March 2026 03:42:28 +0000 (0:00:00.300) 0:00:13.778 ******** 2026-03-19 03:42:31.204135 | orchestrator | changed: [testbed-node-3] 2026-03-19 03:42:31.204146 | orchestrator | changed: [testbed-node-4] 2026-03-19 03:42:31.204157 | orchestrator | changed: [testbed-node-5] 2026-03-19 03:42:41.631407 | orchestrator | 2026-03-19 03:42:41.631524 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-19 03:42:41.631537 | orchestrator | Thursday 19 March 2026 03:42:31 +0000 (0:00:02.767) 0:00:16.545 ******** 2026-03-19 03:42:41.631543 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:41.631586 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:41.631634 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:41.631644 | orchestrator | 2026-03-19 03:42:41.631654 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-19 03:42:41.631663 | orchestrator | Thursday 19 March 2026 03:42:31 +0000 (0:00:00.312) 0:00:16.857 ******** 2026-03-19 03:42:41.631669 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:41.631675 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:41.631681 | orchestrator | ok: [testbed-node-5] 2026-03-19 
03:42:41.631687 | orchestrator | 2026-03-19 03:42:41.631693 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-19 03:42:41.631699 | orchestrator | Thursday 19 March 2026 03:42:32 +0000 (0:00:00.523) 0:00:17.381 ******** 2026-03-19 03:42:41.631705 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:41.631712 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:41.631718 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:41.631724 | orchestrator | 2026-03-19 03:42:41.631729 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-19 03:42:41.631736 | orchestrator | Thursday 19 March 2026 03:42:32 +0000 (0:00:00.310) 0:00:17.692 ******** 2026-03-19 03:42:41.631741 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:42:41.631747 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:42:41.631754 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:42:41.631763 | orchestrator | 2026-03-19 03:42:41.631772 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-19 03:42:41.631780 | orchestrator | Thursday 19 March 2026 03:42:32 +0000 (0:00:00.552) 0:00:18.245 ******** 2026-03-19 03:42:41.631788 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:41.631797 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:41.631824 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:41.631834 | orchestrator | 2026-03-19 03:42:41.631844 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-19 03:42:41.631853 | orchestrator | Thursday 19 March 2026 03:42:33 +0000 (0:00:00.324) 0:00:18.570 ******** 2026-03-19 03:42:41.631864 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:42:41.631875 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:42:41.631884 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:42:41.631894 | 
orchestrator |
2026-03-19 03:42:41.631903 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-19 03:42:41.631913 | orchestrator | Thursday 19 March 2026 03:42:33 +0000 (0:00:00.324) 0:00:18.895 ********
2026-03-19 03:42:41.631923 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:41.631932 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:41.631942 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:41.631951 | orchestrator |
2026-03-19 03:42:41.631957 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-19 03:42:41.631964 | orchestrator | Thursday 19 March 2026 03:42:34 +0000 (0:00:00.509) 0:00:19.404 ********
2026-03-19 03:42:41.631972 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:41.631978 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:41.632002 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:41.632009 | orchestrator |
2026-03-19 03:42:41.632016 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-19 03:42:41.632023 | orchestrator | Thursday 19 March 2026 03:42:34 +0000 (0:00:00.744) 0:00:20.149 ********
2026-03-19 03:42:41.632030 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:41.632037 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:41.632043 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:41.632049 | orchestrator |
2026-03-19 03:42:41.632056 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-19 03:42:41.632062 | orchestrator | Thursday 19 March 2026 03:42:35 +0000 (0:00:00.325) 0:00:20.474 ********
2026-03-19 03:42:41.632069 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:42:41.632075 | orchestrator | skipping: [testbed-node-4]
2026-03-19 03:42:41.632082 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:42:41.632088 | orchestrator |
2026-03-19 03:42:41.632095 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-19 03:42:41.632101 | orchestrator | Thursday 19 March 2026 03:42:35 +0000 (0:00:00.305) 0:00:20.780 ********
2026-03-19 03:42:41.632108 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:42:41.632114 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:42:41.632120 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:42:41.632127 | orchestrator |
2026-03-19 03:42:41.632134 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-19 03:42:41.632140 | orchestrator | Thursday 19 March 2026 03:42:35 +0000 (0:00:00.537) 0:00:21.318 ********
2026-03-19 03:42:41.632147 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:41.632154 | orchestrator |
2026-03-19 03:42:41.632161 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-19 03:42:41.632167 | orchestrator | Thursday 19 March 2026 03:42:36 +0000 (0:00:00.277) 0:00:21.595 ********
2026-03-19 03:42:41.632174 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:42:41.632180 | orchestrator |
2026-03-19 03:42:41.632187 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-19 03:42:41.632194 | orchestrator | Thursday 19 March 2026 03:42:36 +0000 (0:00:00.262) 0:00:21.857 ********
2026-03-19 03:42:41.632200 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:41.632207 | orchestrator |
2026-03-19 03:42:41.632215 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-19 03:42:41.632225 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:01.705) 0:00:23.563 ********
2026-03-19 03:42:41.632235 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:41.632245 | orchestrator |
2026-03-19 03:42:41.632255 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-19 03:42:41.632264 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:00.283) 0:00:23.847 ********
2026-03-19 03:42:41.632275 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:41.632285 | orchestrator |
2026-03-19 03:42:41.632309 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:42:41.632317 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:00.071) 0:00:24.105 ********
2026-03-19 03:42:41.632322 | orchestrator |
2026-03-19 03:42:41.632328 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:42:41.632333 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:00.070) 0:00:24.177 ********
2026-03-19 03:42:41.632339 | orchestrator |
2026-03-19 03:42:41.632345 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-19 03:42:41.632350 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:00.076) 0:00:24.248 ********
2026-03-19 03:42:41.632356 | orchestrator |
2026-03-19 03:42:41.632361 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-19 03:42:41.632367 | orchestrator | Thursday 19 March 2026 03:42:38 +0000 (0:00:00.076) 0:00:24.324 ********
2026-03-19 03:42:41.632384 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-19 03:42:41.632393 | orchestrator |
2026-03-19 03:42:41.632407 | orchestrator | TASK [Print report file information] *******************************************
2026-03-19 03:42:41.632420 | orchestrator | Thursday 19 March 2026 03:42:40 +0000 (0:00:01.513) 0:00:25.838 ********
2026-03-19 03:42:41.632429 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-19 03:42:41.632439 | orchestrator |  "msg": [
2026-03-19 03:42:41.632456 | orchestrator |  "Validator run completed.",
2026-03-19 03:42:41.632465 | orchestrator |  "You can find the report file here:",
2026-03-19 03:42:41.632475 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-19T03:42:15+00:00-report.json",
2026-03-19 03:42:41.632486 | orchestrator |  "on the following host:",
2026-03-19 03:42:41.632495 | orchestrator |  "testbed-manager"
2026-03-19 03:42:41.632505 | orchestrator |  ]
2026-03-19 03:42:41.632515 | orchestrator | }
2026-03-19 03:42:41.632525 | orchestrator |
2026-03-19 03:42:41.632534 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:42:41.632545 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-19 03:42:41.632556 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 03:42:41.632565 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-19 03:42:41.632574 | orchestrator |
2026-03-19 03:42:41.632579 | orchestrator |
2026-03-19 03:42:41.632585 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:42:41.632591 | orchestrator | Thursday 19 March 2026 03:42:41 +0000 (0:00:00.830) 0:00:26.668 ********
2026-03-19 03:42:41.632676 | orchestrator | ===============================================================================
2026-03-19 03:42:41.632682 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.77s
2026-03-19 03:42:41.632688 | orchestrator | Aggregate test results step one ----------------------------------------- 1.71s
2026-03-19 03:42:41.632693 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.70s
2026-03-19 03:42:41.632699 | orchestrator | Write report file ------------------------------------------------------- 1.51s
2026-03-19 03:42:41.632704 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-03-19 03:42:41.632710 | orchestrator | Print report file information ------------------------------------------- 0.83s
2026-03-19 03:42:41.632716 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.79s
2026-03-19 03:42:41.632721 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.74s
2026-03-19 03:42:41.632727 | orchestrator | Create report output directory ------------------------------------------ 0.71s
2026-03-19 03:42:41.632733 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.69s
2026-03-19 03:42:41.632738 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.56s
2026-03-19 03:42:41.632744 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.55s
2026-03-19 03:42:41.632750 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s
2026-03-19 03:42:41.632755 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s
2026-03-19 03:42:41.632761 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.52s
2026-03-19 03:42:41.632767 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s
2026-03-19 03:42:41.632772 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s
2026-03-19 03:42:41.632778 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s
2026-03-19 03:42:41.632790 | orchestrator | Set test result to passed if all containers are running ----------------- 0.48s
2026-03-19 03:42:41.632796 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.33s
2026-03-19 03:42:41.933052 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-03-19 03:42:41.942992 | orchestrator | + set -e
2026-03-19 03:42:41.943078 | orchestrator | + source /opt/manager-vars.sh
2026-03-19 03:42:41.944153 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-19 03:42:41.944390 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-19 03:42:41.944474 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-19 03:42:41.944488 | orchestrator | ++ CEPH_VERSION=reef
2026-03-19 03:42:41.944498 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-19 03:42:41.944510 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-19 03:42:41.944521 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-19 03:42:41.944531 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-19 03:42:41.944541 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-19 03:42:41.944552 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-19 03:42:41.944562 | orchestrator | ++ export ARA=false
2026-03-19 03:42:41.944572 | orchestrator | ++ ARA=false
2026-03-19 03:42:41.944582 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-19 03:42:41.944592 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-19 03:42:41.944623 | orchestrator | ++ export TEMPEST=false
2026-03-19 03:42:41.944635 | orchestrator | ++ TEMPEST=false
2026-03-19 03:42:41.944650 | orchestrator | ++ export IS_ZUUL=true
2026-03-19 03:42:41.944664 | orchestrator | ++ IS_ZUUL=true
2026-03-19 03:42:41.944679 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 03:42:41.944696 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 03:42:41.944712 | orchestrator | ++ export EXTERNAL_API=false
2026-03-19 03:42:41.944726 | orchestrator | ++ EXTERNAL_API=false
2026-03-19 03:42:41.944741 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-19 03:42:41.944755 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-19 03:42:41.944770 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-19 03:42:41.944786 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-19 03:42:41.944802 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-19 03:42:41.944817 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-19 03:42:41.944832 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-19 03:42:41.944846 | orchestrator | + source /etc/os-release
2026-03-19 03:42:41.944860 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-03-19 03:42:41.944875 | orchestrator | ++ NAME=Ubuntu
2026-03-19 03:42:41.944891 | orchestrator | ++ VERSION_ID=24.04
2026-03-19 03:42:41.944906 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-03-19 03:42:41.944921 | orchestrator | ++ VERSION_CODENAME=noble
2026-03-19 03:42:41.944934 | orchestrator | ++ ID=ubuntu
2026-03-19 03:42:41.944945 | orchestrator | ++ ID_LIKE=debian
2026-03-19 03:42:41.944955 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-03-19 03:42:41.944975 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-03-19 03:42:41.944984 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-03-19 03:42:41.944995 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-03-19 03:42:41.945006 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-03-19 03:42:41.945015 | orchestrator | ++ LOGO=ubuntu-logo
2026-03-19 03:42:41.945025 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-03-19 03:42:41.945052 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-03-19 03:42:41.945065 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-03-19 03:42:41.982332 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-03-19 03:43:09.222146 | orchestrator |
2026-03-19 03:43:09.222271 | orchestrator | # Status of Elasticsearch
2026-03-19 03:43:09.222297 | orchestrator |
2026-03-19 03:43:09.222313 | orchestrator | + pushd /opt/configuration/contrib
2026-03-19 03:43:09.222331 | orchestrator | + echo
2026-03-19 03:43:09.222348 | orchestrator | + echo '# Status of Elasticsearch'
2026-03-19 03:43:09.222363 | orchestrator | + echo
2026-03-19 03:43:09.222378 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-03-19 03:43:09.393227 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-03-19 03:43:09.393334 | orchestrator |
2026-03-19 03:43:09.393346 | orchestrator | # Status of MariaDB
2026-03-19 03:43:09.393354 | orchestrator |
2026-03-19 03:43:09.393362 | orchestrator | + echo
2026-03-19 03:43:09.393371 | orchestrator | + echo '# Status of MariaDB'
2026-03-19 03:43:09.393378 | orchestrator | + echo
2026-03-19 03:43:09.393676 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-19 03:43:09.462313 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 03:43:09.462394 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-19 03:43:09.462422 | orchestrator | + MARIADB_USER=root_shard_0
2026-03-19 03:43:09.462439 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-03-19 03:43:09.530868 | orchestrator | Reading package lists...
2026-03-19 03:43:09.884467 | orchestrator | Building dependency tree...
2026-03-19 03:43:09.885173 | orchestrator | Reading state information...
2026-03-19 03:43:10.345233 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-03-19 03:43:10.345310 | orchestrator | bc set to manually installed.
2026-03-19 03:43:10.345320 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
2026-03-19 03:43:11.031392 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-03-19 03:43:11.032441 | orchestrator |
2026-03-19 03:43:11.032493 | orchestrator | # Status of Prometheus
2026-03-19 03:43:11.032501 | orchestrator |
2026-03-19 03:43:11.032506 | orchestrator | + echo
2026-03-19 03:43:11.032511 | orchestrator | + echo '# Status of Prometheus'
2026-03-19 03:43:11.032516 | orchestrator | + echo
2026-03-19 03:43:11.032521 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-03-19 03:43:11.100746 | orchestrator | Unauthorized
2026-03-19 03:43:11.101768 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-03-19 03:43:11.152881 | orchestrator | Unauthorized
2026-03-19 03:43:11.155097 | orchestrator |
2026-03-19 03:43:11.155165 | orchestrator | # Status of RabbitMQ
2026-03-19 03:43:11.155177 | orchestrator |
2026-03-19 03:43:11.155186 | orchestrator | + echo
2026-03-19 03:43:11.155193 | orchestrator | + echo '# Status of RabbitMQ'
2026-03-19 03:43:11.155201 | orchestrator | + echo
2026-03-19 03:43:11.156051 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-19 03:43:11.215652 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-19 03:43:11.215730 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-19 03:43:11.215741 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-03-19 03:43:11.711922 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-03-19 03:43:11.721292 | orchestrator |
2026-03-19 03:43:11.721394 | orchestrator | # Status of Redis
2026-03-19 03:43:11.721414 | orchestrator |
2026-03-19 03:43:11.721430 | orchestrator | + echo
2026-03-19 03:43:11.721446 | orchestrator | + echo '# Status of Redis'
2026-03-19 03:43:11.721463 | orchestrator | + echo
2026-03-19 03:43:11.721474 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-19 03:43:11.725824 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001757s;;;0.000000;10.000000
2026-03-19 03:43:11.725920 | orchestrator |
2026-03-19 03:43:11.725932 | orchestrator | # Create backup of MariaDB database
2026-03-19 03:43:11.725941 | orchestrator |
2026-03-19 03:43:11.725950 | orchestrator | + popd
2026-03-19 03:43:11.725960 | orchestrator | + echo
2026-03-19 03:43:11.725969 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-19 03:43:11.725978 | orchestrator | + echo
2026-03-19 03:43:11.725988 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-19 03:43:13.726588 | orchestrator | 2026-03-19 03:43:13 | INFO  | Task f8bb38c0-20c0-4320-b986-8d177336a30f (mariadb_backup) was prepared for execution.
2026-03-19 03:43:13.726754 | orchestrator | 2026-03-19 03:43:13 | INFO  | It takes a moment until task f8bb38c0-20c0-4320-b986-8d177336a30f (mariadb_backup) has been started and output is visible here.
2026-03-19 03:44:19.992140 | orchestrator |
2026-03-19 03:44:19.992243 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 03:44:19.992257 | orchestrator |
2026-03-19 03:44:19.992267 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 03:44:19.992277 | orchestrator | Thursday 19 March 2026 03:43:18 +0000 (0:00:00.185) 0:00:00.185 ********
2026-03-19 03:44:19.992286 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:44:19.992317 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:44:19.992327 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:44:19.992336 | orchestrator |
2026-03-19 03:44:19.992344 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 03:44:19.992353 | orchestrator | Thursday 19 March 2026 03:43:18 +0000 (0:00:00.333) 0:00:00.519 ********
2026-03-19 03:44:19.992362 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-19 03:44:19.992375 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-19 03:44:19.992390 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-19 03:44:19.992404 | orchestrator |
2026-03-19 03:44:19.992418 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-19 03:44:19.992432 | orchestrator |
2026-03-19 03:44:19.992446 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-19 03:44:19.992460 | orchestrator | Thursday 19 March 2026 03:43:19 +0000 (0:00:00.415) 0:00:01.122 ********
2026-03-19 03:44:19.992473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 03:44:19.992488 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 03:44:19.992503 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 03:44:19.992517 | orchestrator |
2026-03-19 03:44:19.992531 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-19 03:44:19.992546 | orchestrator | Thursday 19 March 2026 03:43:19 +0000 (0:00:00.415) 0:00:01.537 ********
2026-03-19 03:44:19.992578 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 03:44:19.992661 | orchestrator |
2026-03-19 03:44:19.992679 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-19 03:44:19.992695 | orchestrator | Thursday 19 March 2026 03:43:20 +0000 (0:00:00.538) 0:00:02.076 ********
2026-03-19 03:44:19.992710 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:44:19.992725 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:44:19.992741 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:44:19.992757 | orchestrator |
2026-03-19 03:44:19.992772 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-19 03:44:19.992788 | orchestrator | Thursday 19 March 2026 03:43:23 +0000 (0:00:03.343) 0:00:05.419 ********
2026-03-19 03:44:19.992804 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-19 03:44:19.992818 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-19 03:44:19.992857 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-19 03:44:19.992867 | orchestrator | mariadb_bootstrap_restart
2026-03-19 03:44:19.992878 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:44:19.992888 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:44:19.992898 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:44:19.992908 | orchestrator |
2026-03-19 03:44:19.992918 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-19 03:44:19.992928 | orchestrator | skipping: no hosts matched
2026-03-19 03:44:19.992939 | orchestrator |
2026-03-19 03:44:19.992948 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-19 03:44:19.992958 | orchestrator | skipping: no hosts matched
2026-03-19 03:44:19.992968 | orchestrator |
2026-03-19 03:44:19.992978 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-19 03:44:19.992988 | orchestrator | skipping: no hosts matched
2026-03-19 03:44:19.992998 | orchestrator |
2026-03-19 03:44:19.993008 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-19 03:44:19.993017 | orchestrator |
2026-03-19 03:44:19.993028 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-19 03:44:19.993038 | orchestrator | Thursday 19 March 2026 03:44:18 +0000 (0:00:55.578) 0:01:00.998 ********
2026-03-19 03:44:19.993047 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:44:19.993056 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:44:19.993077 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:44:19.993086 | orchestrator |
2026-03-19 03:44:19.993094 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-19 03:44:19.993103 | orchestrator | Thursday 19 March 2026 03:44:19 +0000 (0:00:00.298) 0:01:01.296 ********
2026-03-19 03:44:19.993112 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:44:19.993120 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:44:19.993129 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:44:19.993137 | orchestrator |
2026-03-19 03:44:19.993146 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 03:44:19.993156 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 03:44:19.993165 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-19 03:44:19.993174 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-19 03:44:19.993183 | orchestrator |
2026-03-19 03:44:19.993192 | orchestrator |
2026-03-19 03:44:19.993207 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 03:44:19.993221 | orchestrator | Thursday 19 March 2026 03:44:19 +0000 (0:00:00.393) 0:01:01.690 ********
2026-03-19 03:44:19.993293 | orchestrator | ===============================================================================
2026-03-19 03:44:19.993311 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 55.58s
2026-03-19 03:44:19.993350 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.34s
2026-03-19 03:44:19.993368 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-03-19 03:44:19.993383 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2026-03-19 03:44:19.993399 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2026-03-19 03:44:19.993408 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s
2026-03-19 03:44:19.993416 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-19 03:44:19.993425 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s
2026-03-19 03:44:20.344167 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-19 03:44:20.350269 | orchestrator | + set -e
2026-03-19 03:44:20.350341 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-19 03:44:20.350915 | orchestrator | ++ export INTERACTIVE=false
2026-03-19 03:44:20.351005 | orchestrator | ++ INTERACTIVE=false
2026-03-19 03:44:20.351013 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-19 03:44:20.351020 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-19 03:44:20.351027 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-19 03:44:20.352328 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-19 03:44:20.358399 | orchestrator |
2026-03-19 03:44:20.358441 | orchestrator | # OpenStack endpoints
2026-03-19 03:44:20.358448 | orchestrator |
2026-03-19 03:44:20.358452 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-19 03:44:20.358456 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-19 03:44:20.358460 | orchestrator | + export OS_CLOUD=admin
2026-03-19 03:44:20.358465 | orchestrator | + OS_CLOUD=admin
2026-03-19 03:44:20.358469 | orchestrator | + echo
2026-03-19 03:44:20.358473 | orchestrator | + echo '# OpenStack endpoints'
2026-03-19 03:44:20.358477 | orchestrator | + echo
2026-03-19 03:44:20.358481 | orchestrator | + openstack endpoint list
2026-03-19 03:44:23.557300 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-19 03:44:23.557394 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-19 03:44:23.557408 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-19 03:44:23.557441 | orchestrator | | 038c5e78d13c4542af5aeab127d474e0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-19 03:44:23.557452 | orchestrator | | 03d94d0943cb481199063e30c8195e23 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-19 03:44:23.557461 | orchestrator | | 0413f712454a400499161e1cf4eddc16 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-19 03:44:23.557471 | orchestrator | | 0a750663ed3f4c11b96805bf37b12c8e | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-03-19 03:44:23.557481 | orchestrator | | 0b1d8619c53c49bbb1922e99219b0dd9 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-19 03:44:23.557491 | orchestrator | | 0e7efb40bfc64efdbc9f06153867c5e1 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-19 03:44:23.557501 | orchestrator | | 14dae708a2d047608b2e36c38cd403b7 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-19 03:44:23.557511 | orchestrator | | 1e018e4c6f964f8e8c18cc2c9c3d5a33 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-19 03:44:23.557520 | orchestrator | | 233ebcfd2dcb488a837ead545af1b5f5 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-19 03:44:23.557530 | orchestrator | | 29908b21a16f4673a8378492227dc4d8 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-19 03:44:23.557555 | orchestrator | | 2f92dfcc7fb5495d8b3b6a6a73749d0b | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-19 03:44:23.557566 | orchestrator | | 5d3e755d0bf8403884ecf3211347a865 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-19 03:44:23.557575 | orchestrator | | 5d97e319c4b84319aa44ac8002172b40 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-19 03:44:23.557585 | orchestrator | | 6061ff918ad24a2787915e902a6018b1 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-03-19 03:44:23.557595 | orchestrator | | 67350e32c959452facbbc6593910c3b2 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-19 03:44:23.557630 | orchestrator | | 700e0b414df64f0591ace7d54d4cb07b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-19 03:44:23.557640 | orchestrator | | 730e96c5243a41449202c5bef1a2ffbe | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-19 03:44:23.557649 | orchestrator | | 745bd92e3806492487a1a92f7e28324a | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-19 03:44:23.557659 | orchestrator | | 81b79336ac764b3cac381e2e12a48f66 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-03-19 03:44:23.557669 | orchestrator | | 8cb7582a7d57416b8beaf4cf6453f7ac | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-03-19 03:44:23.557701 | orchestrator | | 8db0224ce6fb43bf8bbc8e2891914654 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-19 03:44:23.557717 | orchestrator | | 99be6f797abb4d7896354b7d38060de4 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-19 03:44:23.557728 | orchestrator | | be0f39731cd940d78bcd32bf8dc5e028 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-19 03:44:23.557737 | orchestrator | | cad34a163f0b4f028fe8405582b1b75a | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-03-19 03:44:23.557747 | orchestrator | | d26d1a0f46794e868ca0fbe4a45025e5 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-03-19 03:44:23.557757 | orchestrator | | dc6b04a3a0cf4fc586bb2d7a5c0cfdaa | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-19 03:44:23.557767 | orchestrator | | e0fda73984a54b56b2dabf9161a6aad7 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-19 03:44:23.557776 | orchestrator | | e98220c9c3cd41ac9702e89b0efa920b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-19 03:44:23.557786 | orchestrator | | f0970d832ad14365b1244ba80ba3bb38 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-19 03:44:23.557796 | orchestrator | | f79f8da9f88b436aa531d1858c6d7301 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-19 03:44:23.557806 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-19 03:44:23.804454 | orchestrator |
2026-03-19 03:44:23.804532 | orchestrator | # Cinder
2026-03-19 03:44:23.804541 | orchestrator |
2026-03-19 03:44:23.804548 | orchestrator | + echo
2026-03-19 03:44:23.804555 | orchestrator | + echo '# Cinder'
2026-03-19 03:44:23.804562 | orchestrator | + echo
2026-03-19 03:44:23.804569 | orchestrator | + openstack volume service list
2026-03-19 03:44:26.446699 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-19 03:44:26.446879 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-19 03:44:26.446889 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-19 03:44:26.446894 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446898 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446902 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446906 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446910 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-19T03:44:17.000000 |
2026-03-19 03:44:26.446913 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-19T03:44:17.000000 |
2026-03-19 03:44:26.446917 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-19T03:44:19.000000 |
2026-03-19 03:44:26.446921 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446942 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-19T03:44:21.000000 |
2026-03-19 03:44:26.446946 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-19 03:44:26.701155 | orchestrator |
2026-03-19 03:44:26.701216 | orchestrator | # Neutron
2026-03-19 03:44:26.701221 | orchestrator |
2026-03-19 03:44:26.701226 | orchestrator | + echo
2026-03-19 03:44:26.701230 | orchestrator | + echo '# Neutron'
2026-03-19 03:44:26.701236 | orchestrator | + echo
2026-03-19 03:44:26.701240 | orchestrator | + openstack network agent list
2026-03-19 03:44:29.415348 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-19 03:44:29.415421 | orchestrator | | ID | Agent Type | Host
| Availability Zone | Alive | State | Binary | 2026-03-19 03:44:29.415427 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-19 03:44:29.415432 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415436 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415454 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415458 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415462 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415465 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-19 03:44:29.415469 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-19 03:44:29.415473 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-19 03:44:29.415477 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-19 03:44:29.415481 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-19 03:44:29.680975 | orchestrator | + openstack network service provider list 2026-03-19 03:44:32.272495 | orchestrator | +---------------+------+---------+ 2026-03-19 03:44:32.272568 | orchestrator | | Service Type 
| Name | Default | 2026-03-19 03:44:32.272574 | orchestrator | +---------------+------+---------+ 2026-03-19 03:44:32.272579 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-19 03:44:32.272583 | orchestrator | +---------------+------+---------+ 2026-03-19 03:44:32.536190 | orchestrator | 2026-03-19 03:44:32.536259 | orchestrator | # Nova 2026-03-19 03:44:32.536265 | orchestrator | 2026-03-19 03:44:32.536269 | orchestrator | + echo 2026-03-19 03:44:32.536274 | orchestrator | + echo '# Nova' 2026-03-19 03:44:32.536278 | orchestrator | + echo 2026-03-19 03:44:32.536282 | orchestrator | + openstack compute service list 2026-03-19 03:44:35.187789 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-19 03:44:35.187884 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-19 03:44:35.187894 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-19 03:44:35.187938 | orchestrator | | 86387392-673c-40e2-8da7-b0e2d18a8beb | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-19T03:44:33.000000 | 2026-03-19 03:44:35.187946 | orchestrator | | cc8b5079-597d-4b57-b340-0b7247bbc0dd | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-19T03:44:25.000000 | 2026-03-19 03:44:35.187953 | orchestrator | | 15cf79e0-65f1-4515-b9d0-1f59a1d44f23 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-19T03:44:25.000000 | 2026-03-19 03:44:35.187959 | orchestrator | | 6d05fbcd-b2ec-46c9-9872-9947943a997a | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-19T03:44:25.000000 | 2026-03-19 03:44:35.187965 | orchestrator | | 3e8846e3-6e0f-4a11-b21b-cd66a777460f | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-19T03:44:26.000000 | 2026-03-19 03:44:35.187971 | orchestrator 
| | 7e457b0b-ac42-48d3-b828-3f44301d8508 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-19T03:44:27.000000 | 2026-03-19 03:44:35.187977 | orchestrator | | 68e92be3-c945-4e97-b6bd-d7e85bc897df | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-19T03:44:25.000000 | 2026-03-19 03:44:35.187984 | orchestrator | | be6a0e3f-5ed2-4b2b-96b1-7b095c73eab0 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-19T03:44:25.000000 | 2026-03-19 03:44:35.187990 | orchestrator | | 5cf51337-c92f-4b90-9fa6-c3d18df386d5 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-19T03:44:26.000000 | 2026-03-19 03:44:35.187996 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-19 03:44:35.448114 | orchestrator | + openstack hypervisor list 2026-03-19 03:44:38.189770 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-19 03:44:38.189834 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-19 03:44:38.189840 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-19 03:44:38.189844 | orchestrator | | 0856562c-45dd-4e11-b6ca-821364763142 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-19 03:44:38.189848 | orchestrator | | 53be8459-f7a3-4e56-8fb5-05aa66c22f16 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-19 03:44:38.189852 | orchestrator | | 441a3f48-c09b-4774-a03a-5cda0ab24650 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-19 03:44:38.189856 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-19 03:44:38.459892 | orchestrator | 2026-03-19 03:44:38.459958 | orchestrator | # Run OpenStack test play 2026-03-19 03:44:38.459964 | orchestrator | 2026-03-19 
03:44:38.459971 | orchestrator | + echo 2026-03-19 03:44:38.459976 | orchestrator | + echo '# Run OpenStack test play' 2026-03-19 03:44:38.459981 | orchestrator | + echo 2026-03-19 03:44:38.459985 | orchestrator | + osism apply --environment openstack test 2026-03-19 03:44:40.441350 | orchestrator | 2026-03-19 03:44:40 | INFO  | Trying to run play test in environment openstack 2026-03-19 03:44:50.538349 | orchestrator | 2026-03-19 03:44:50 | INFO  | Task 798c96fc-7d0e-4922-9815-67069b2f47ca (test) was prepared for execution. 2026-03-19 03:44:50.538424 | orchestrator | 2026-03-19 03:44:50 | INFO  | It takes a moment until task 798c96fc-7d0e-4922-9815-67069b2f47ca (test) has been started and output is visible here. 2026-03-19 03:47:36.121552 | orchestrator | 2026-03-19 03:47:36.121687 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-19 03:47:36.121703 | orchestrator | 2026-03-19 03:47:36.121714 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-19 03:47:36.121723 | orchestrator | Thursday 19 March 2026 03:44:54 +0000 (0:00:00.069) 0:00:00.069 ******** 2026-03-19 03:47:36.121733 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121743 | orchestrator | 2026-03-19 03:47:36.121752 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-19 03:47:36.121782 | orchestrator | Thursday 19 March 2026 03:44:58 +0000 (0:00:03.579) 0:00:03.648 ******** 2026-03-19 03:47:36.121792 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121800 | orchestrator | 2026-03-19 03:47:36.121809 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-19 03:47:36.121818 | orchestrator | Thursday 19 March 2026 03:45:02 +0000 (0:00:04.187) 0:00:07.836 ******** 2026-03-19 03:47:36.121826 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121835 | orchestrator | 2026-03-19 
03:47:36.121843 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-19 03:47:36.121852 | orchestrator | Thursday 19 March 2026 03:45:09 +0000 (0:00:07.043) 0:00:14.880 ******** 2026-03-19 03:47:36.121860 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121869 | orchestrator | 2026-03-19 03:47:36.121877 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-19 03:47:36.121886 | orchestrator | Thursday 19 March 2026 03:45:13 +0000 (0:00:03.983) 0:00:18.864 ******** 2026-03-19 03:47:36.121894 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121903 | orchestrator | 2026-03-19 03:47:36.121911 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-19 03:47:36.121920 | orchestrator | Thursday 19 March 2026 03:45:17 +0000 (0:00:04.197) 0:00:23.061 ******** 2026-03-19 03:47:36.121929 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-19 03:47:36.121938 | orchestrator | changed: [localhost] => (item=member) 2026-03-19 03:47:36.121947 | orchestrator | changed: [localhost] => (item=creator) 2026-03-19 03:47:36.121956 | orchestrator | 2026-03-19 03:47:36.121964 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-19 03:47:36.121973 | orchestrator | Thursday 19 March 2026 03:45:29 +0000 (0:00:11.342) 0:00:34.403 ******** 2026-03-19 03:47:36.121982 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.121990 | orchestrator | 2026-03-19 03:47:36.121999 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-19 03:47:36.122007 | orchestrator | Thursday 19 March 2026 03:45:33 +0000 (0:00:04.500) 0:00:38.904 ******** 2026-03-19 03:47:36.122070 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122080 | orchestrator | 2026-03-19 03:47:36.122089 | orchestrator | TASK [Add rule 
to ssh security group] ****************************************** 2026-03-19 03:47:36.122099 | orchestrator | Thursday 19 March 2026 03:45:38 +0000 (0:00:04.877) 0:00:43.782 ******** 2026-03-19 03:47:36.122109 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122119 | orchestrator | 2026-03-19 03:47:36.122129 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-19 03:47:36.122139 | orchestrator | Thursday 19 March 2026 03:45:42 +0000 (0:00:04.183) 0:00:47.965 ******** 2026-03-19 03:47:36.122149 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122158 | orchestrator | 2026-03-19 03:47:36.122168 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-03-19 03:47:36.122178 | orchestrator | Thursday 19 March 2026 03:45:46 +0000 (0:00:03.921) 0:00:51.887 ******** 2026-03-19 03:47:36.122187 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122197 | orchestrator | 2026-03-19 03:47:36.122207 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-19 03:47:36.122218 | orchestrator | Thursday 19 March 2026 03:45:50 +0000 (0:00:04.095) 0:00:55.982 ******** 2026-03-19 03:47:36.122228 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122237 | orchestrator | 2026-03-19 03:47:36.122247 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-19 03:47:36.122257 | orchestrator | Thursday 19 March 2026 03:45:54 +0000 (0:00:04.108) 0:01:00.090 ******** 2026-03-19 03:47:36.122266 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122276 | orchestrator | 2026-03-19 03:47:36.122286 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-19 03:47:36.122296 | orchestrator | Thursday 19 March 2026 03:45:59 +0000 (0:00:04.699) 0:01:04.790 ******** 2026-03-19 03:47:36.122306 | orchestrator | changed: 
[localhost] 2026-03-19 03:47:36.122323 | orchestrator | 2026-03-19 03:47:36.122336 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-19 03:47:36.122350 | orchestrator | Thursday 19 March 2026 03:46:04 +0000 (0:00:05.282) 0:01:10.072 ******** 2026-03-19 03:47:36.122364 | orchestrator | changed: [localhost] 2026-03-19 03:47:36.122377 | orchestrator | 2026-03-19 03:47:36.122395 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-19 03:47:36.122416 | orchestrator | 2026-03-19 03:47:36.122430 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-19 03:47:36.122444 | orchestrator | Thursday 19 March 2026 03:46:16 +0000 (0:00:11.604) 0:01:21.677 ******** 2026-03-19 03:47:36.122459 | orchestrator | ok: [localhost] 2026-03-19 03:47:36.122474 | orchestrator | 2026-03-19 03:47:36.122488 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-19 03:47:36.122502 | orchestrator | Thursday 19 March 2026 03:46:19 +0000 (0:00:03.513) 0:01:25.191 ******** 2026-03-19 03:47:36.122517 | orchestrator | skipping: [localhost] 2026-03-19 03:47:36.122531 | orchestrator | 2026-03-19 03:47:36.122545 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-19 03:47:36.122558 | orchestrator | Thursday 19 March 2026 03:46:19 +0000 (0:00:00.039) 0:01:25.231 ******** 2026-03-19 03:47:36.122588 | orchestrator | skipping: [localhost] 2026-03-19 03:47:36.122602 | orchestrator | 2026-03-19 03:47:36.122665 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-19 03:47:36.122675 | orchestrator | Thursday 19 March 2026 03:46:19 +0000 (0:00:00.048) 0:01:25.279 ******** 2026-03-19 03:47:36.122684 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-19 03:47:36.122694 | orchestrator | 
skipping: [localhost] => (item=test-3)  2026-03-19 03:47:36.122722 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-19 03:47:36.122731 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-19 03:47:36.122740 | orchestrator | skipping: [localhost] => (item=test)  2026-03-19 03:47:36.122749 | orchestrator | skipping: [localhost] 2026-03-19 03:47:36.122757 | orchestrator | 2026-03-19 03:47:36.122766 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-19 03:47:36.122774 | orchestrator | Thursday 19 March 2026 03:46:20 +0000 (0:00:00.152) 0:01:25.432 ******** 2026-03-19 03:47:36.122783 | orchestrator | skipping: [localhost] 2026-03-19 03:47:36.122792 | orchestrator | 2026-03-19 03:47:36.122800 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-19 03:47:36.122809 | orchestrator | Thursday 19 March 2026 03:46:20 +0000 (0:00:00.155) 0:01:25.588 ******** 2026-03-19 03:47:36.122817 | orchestrator | changed: [localhost] => (item=test) 2026-03-19 03:47:36.122826 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-19 03:47:36.122835 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-19 03:47:36.122843 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-19 03:47:36.122852 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-19 03:47:36.122860 | orchestrator | 2026-03-19 03:47:36.122869 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-19 03:47:36.122877 | orchestrator | Thursday 19 March 2026 03:46:24 +0000 (0:00:04.626) 0:01:30.215 ******** 2026-03-19 03:47:36.122886 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-19 03:47:36.122896 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-03-19 03:47:36.122904 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-19 03:47:36.122913 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-19 03:47:36.122924 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j337720806739.3687', 'results_file': '/ansible/.ansible_async/j337720806739.3687', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.122936 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j70862126492.3712', 'results_file': '/ansible/.ansible_async/j70862126492.3712', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.122951 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j26979916636.3737', 'results_file': '/ansible/.ansible_async/j26979916636.3737', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.122961 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-03-19 03:47:36.122970 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j443106696225.3762', 'results_file': '/ansible/.ansible_async/j443106696225.3762', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.122979 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j596418125183.3787', 'results_file': '/ansible/.ansible_async/j596418125183.3787', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.122988 | orchestrator | 2026-03-19 03:47:36.122996 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-19 03:47:36.123005 | orchestrator | Thursday 19 March 2026 03:47:22 +0000 (0:00:57.355) 0:02:27.570 ******** 2026-03-19 03:47:36.123014 | orchestrator | changed: [localhost] => (item=test) 2026-03-19 03:47:36.123023 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-19 03:47:36.123031 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-19 03:47:36.123039 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-19 03:47:36.123048 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-19 03:47:36.123057 | orchestrator | 2026-03-19 03:47:36.123065 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-19 03:47:36.123074 | orchestrator | Thursday 19 March 2026 03:47:26 +0000 (0:00:04.522) 0:02:32.092 ******** 2026-03-19 03:47:36.123082 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-03-19 03:47:36.123092 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j101130600926.3898', 'results_file': '/ansible/.ansible_async/j101130600926.3898', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.123101 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j733779186132.3923', 'results_file': '/ansible/.ansible_async/j733779186132.3923', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.123110 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j435080666556.3948', 'results_file': '/ansible/.ansible_async/j435080666556.3948', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-19 03:47:36.123125 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j558094296411.3973', 'results_file': '/ansible/.ansible_async/j558094296411.3973', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040349 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j971268442899.3998', 'results_file': '/ansible/.ansible_async/j971268442899.3998', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040497 | orchestrator | 2026-03-19 03:48:15.040526 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-19 03:48:15.040544 | orchestrator | Thursday 19 March 2026 03:47:36 +0000 (0:00:09.408) 0:02:41.501 ******** 2026-03-19 03:48:15.040556 | orchestrator | changed: [localhost] => (item=test) 2026-03-19 03:48:15.040569 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-19 03:48:15.040580 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-19 03:48:15.040618 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-03-19 03:48:15.040743 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-19 03:48:15.040755 | orchestrator | 2026-03-19 03:48:15.040766 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-19 03:48:15.040780 | orchestrator | Thursday 19 March 2026 03:47:40 +0000 (0:00:04.473) 0:02:45.975 ******** 2026-03-19 03:48:15.040799 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-03-19 03:48:15.040820 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j707718136228.4067', 'results_file': '/ansible/.ansible_async/j707718136228.4067', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040840 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j115299241507.4092', 'results_file': '/ansible/.ansible_async/j115299241507.4092', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040877 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j617856600085.4118', 'results_file': '/ansible/.ansible_async/j617856600085.4118', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040897 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j403902013431.4151', 'results_file': '/ansible/.ansible_async/j403902013431.4151', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040915 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j796950188770.4177', 'results_file': '/ansible/.ansible_async/j796950188770.4177', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-19 03:48:15.040933 | orchestrator | 2026-03-19 
03:48:15.040951 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-19 03:48:15.040968 | orchestrator | Thursday 19 March 2026 03:47:50 +0000 (0:00:09.458) 0:02:55.433 ******** 2026-03-19 03:48:15.040985 | orchestrator | changed: [localhost] 2026-03-19 03:48:15.041003 | orchestrator | 2026-03-19 03:48:15.041019 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-19 03:48:15.041036 | orchestrator | Thursday 19 March 2026 03:47:56 +0000 (0:00:06.198) 0:03:01.632 ******** 2026-03-19 03:48:15.041052 | orchestrator | changed: [localhost] 2026-03-19 03:48:15.041070 | orchestrator | 2026-03-19 03:48:15.041087 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-19 03:48:15.041104 | orchestrator | Thursday 19 March 2026 03:48:09 +0000 (0:00:13.324) 0:03:14.956 ******** 2026-03-19 03:48:15.041124 | orchestrator | ok: [localhost] 2026-03-19 03:48:15.041142 | orchestrator | 2026-03-19 03:48:15.041160 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-19 03:48:15.041178 | orchestrator | Thursday 19 March 2026 03:48:14 +0000 (0:00:05.174) 0:03:20.130 ******** 2026-03-19 03:48:15.041195 | orchestrator | ok: [localhost] => { 2026-03-19 03:48:15.041214 | orchestrator |  "msg": "192.168.112.161" 2026-03-19 03:48:15.041232 | orchestrator | } 2026-03-19 03:48:15.041251 | orchestrator | 2026-03-19 03:48:15.041269 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:48:15.041287 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 03:48:15.041306 | orchestrator | 2026-03-19 03:48:15.041324 | orchestrator | 2026-03-19 03:48:15.041342 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:48:15.041359 | 
orchestrator | Thursday 19 March 2026 03:48:14 +0000 (0:00:00.042) 0:03:20.173 ******** 2026-03-19 03:48:15.041378 | orchestrator | =============================================================================== 2026-03-19 03:48:15.041406 | orchestrator | Wait for instance creation to complete --------------------------------- 57.36s 2026-03-19 03:48:15.041442 | orchestrator | Attach test volume ----------------------------------------------------- 13.32s 2026-03-19 03:48:15.041461 | orchestrator | Create test router ----------------------------------------------------- 11.60s 2026-03-19 03:48:15.041478 | orchestrator | Add member roles to user test ------------------------------------------ 11.34s 2026-03-19 03:48:15.041495 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.46s 2026-03-19 03:48:15.041511 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.41s 2026-03-19 03:48:15.041529 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.04s 2026-03-19 03:48:15.041611 | orchestrator | Create test volume ------------------------------------------------------ 6.20s 2026-03-19 03:48:15.041661 | orchestrator | Create test subnet ------------------------------------------------------ 5.28s 2026-03-19 03:48:15.041679 | orchestrator | Create floating ip address ---------------------------------------------- 5.17s 2026-03-19 03:48:15.041696 | orchestrator | Create ssh security group ----------------------------------------------- 4.88s 2026-03-19 03:48:15.041713 | orchestrator | Create test network ----------------------------------------------------- 4.70s 2026-03-19 03:48:15.041732 | orchestrator | Create test instances --------------------------------------------------- 4.63s 2026-03-19 03:48:15.041750 | orchestrator | Add metadata to instances ----------------------------------------------- 4.52s 2026-03-19 03:48:15.041768 | orchestrator | Create 
test server group ------------------------------------------------ 4.50s 2026-03-19 03:48:15.041787 | orchestrator | Add tag to instances ---------------------------------------------------- 4.47s 2026-03-19 03:48:15.041806 | orchestrator | Create test user -------------------------------------------------------- 4.20s 2026-03-19 03:48:15.041825 | orchestrator | Create test-admin user -------------------------------------------------- 4.19s 2026-03-19 03:48:15.041845 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.18s 2026-03-19 03:48:15.041864 | orchestrator | Create test keypair ----------------------------------------------------- 4.11s 2026-03-19 03:48:15.365399 | orchestrator | + server_list 2026-03-19 03:48:15.365474 | orchestrator | + openstack --os-cloud test server list 2026-03-19 03:48:19.100774 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-19 03:48:19.100880 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-19 03:48:19.100892 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-19 03:48:19.100901 | orchestrator | | 86675254-a3a7-4119-ae6b-8496690563f4 | test-3 | ACTIVE | test=192.168.112.149, 192.168.200.20 | N/A (booted from volume) | SCS-1L-1 | 2026-03-19 03:48:19.100910 | orchestrator | | c388a885-f6f8-429c-a7e0-3206a53ae481 | test-4 | ACTIVE | test=192.168.112.155, 192.168.200.53 | N/A (booted from volume) | SCS-1L-1 | 2026-03-19 03:48:19.100918 | orchestrator | | a4744a02-eb59-4c1b-b43f-cd9ac52436f5 | test-1 | ACTIVE | test=192.168.112.169, 192.168.200.153 | N/A (booted from volume) | SCS-1L-1 | 2026-03-19 03:48:19.100926 | orchestrator | | affc44aa-0da9-405c-b7e0-ffd9d48fcdeb | test-2 | ACTIVE | test=192.168.112.127, 192.168.200.56 | N/A (booted from volume) 
| SCS-1L-1 | 2026-03-19 03:48:19.100934 | orchestrator | | d62c9a01-a077-4da4-93d3-6c7974b6876d | test | ACTIVE | test=192.168.112.161, 192.168.200.248 | N/A (booted from volume) | SCS-1L-1 | 2026-03-19 03:48:19.100942 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-19 03:48:19.388822 | orchestrator | + openstack --os-cloud test server show test 2026-03-19 03:48:22.547916 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:22.548103 | orchestrator | | Field | Value | 2026-03-19 03:48:22.548146 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:22.548167 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-19 03:48:22.548226 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-19 03:48:22.548250 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-19 03:48:22.548271 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-19 03:48:22.548293 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-19 03:48:22.548314 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-19 03:48:22.548352 | orchestrator | | 
OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-19 03:48:22.548377 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-19 03:48:22.548394 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-19 03:48:22.548422 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-19 03:48:22.548441 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-19 03:48:22.548461 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-19 03:48:22.548481 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-19 03:48:22.548500 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-19 03:48:22.548518 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-19 03:48:22.548537 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-19T03:46:56.000000 | 2026-03-19 03:48:22.548568 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-19 03:48:22.548612 | orchestrator | | accessIPv4 | | 2026-03-19 03:48:22.548667 | orchestrator | | accessIPv6 | | 2026-03-19 03:48:22.548696 | orchestrator | | addresses | test=192.168.112.161, 192.168.200.248 | 2026-03-19 03:48:22.548717 | orchestrator | | config_drive | | 2026-03-19 03:48:22.548736 | orchestrator | | created | 2026-03-19T03:46:29Z | 2026-03-19 03:48:22.548756 | orchestrator | | description | None | 2026-03-19 03:48:22.548775 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-19 03:48:22.548794 | orchestrator | | hostId | 732a430475750f543aad8839a36b15aa74e9c8f99f27d1d1e1585190 | 2026-03-19 03:48:22.548814 | orchestrator | | host_status | None | 2026-03-19 03:48:22.548858 | orchestrator | | id | d62c9a01-a077-4da4-93d3-6c7974b6876d | 
2026-03-19 03:48:22.548880 | orchestrator | | image | N/A (booted from volume) | 2026-03-19 03:48:22.548899 | orchestrator | | key_name | test | 2026-03-19 03:48:22.548926 | orchestrator | | locked | False | 2026-03-19 03:48:22.548945 | orchestrator | | locked_reason | None | 2026-03-19 03:48:22.548965 | orchestrator | | name | test | 2026-03-19 03:48:22.548985 | orchestrator | | pinned_availability_zone | None | 2026-03-19 03:48:22.549003 | orchestrator | | progress | 0 | 2026-03-19 03:48:22.549023 | orchestrator | | project_id | 42ab47695de54fc6bea17ce3f2d5218c | 2026-03-19 03:48:22.549052 | orchestrator | | properties | hostname='test' | 2026-03-19 03:48:22.549072 | orchestrator | | security_groups | name='ssh' | 2026-03-19 03:48:22.549084 | orchestrator | | | name='icmp' | 2026-03-19 03:48:22.549095 | orchestrator | | server_groups | None | 2026-03-19 03:48:22.549106 | orchestrator | | status | ACTIVE | 2026-03-19 03:48:22.549131 | orchestrator | | tags | test | 2026-03-19 03:48:22.549143 | orchestrator | | trusted_image_certificates | None | 2026-03-19 03:48:22.549154 | orchestrator | | updated | 2026-03-19T03:47:28Z | 2026-03-19 03:48:22.549165 | orchestrator | | user_id | 896afb4497f243c68c343b6c47443049 | 2026-03-19 03:48:22.549183 | orchestrator | | volumes_attached | delete_on_termination='True', id='ed2800d3-9cb7-40cc-a324-707a84d22851' | 2026-03-19 03:48:22.549194 | orchestrator | | | delete_on_termination='False', id='09eca6ff-1255-4482-94c4-54a5e5529096' | 2026-03-19 03:48:22.550532 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:22.812302 | orchestrator | + 
openstack --os-cloud test server show test-1 2026-03-19 03:48:25.797838 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:25.797932 | orchestrator | | Field | Value | 2026-03-19 03:48:25.797952 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:25.797962 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-19 03:48:25.797970 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-19 03:48:25.797979 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-19 03:48:25.797987 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-19 03:48:25.798071 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-19 03:48:25.798084 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-19 03:48:25.798107 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-19 03:48:25.798116 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-19 03:48:25.798124 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-19 03:48:25.798136 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-19 03:48:25.798145 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-19 03:48:25.798153 | orchestrator | | 
OS-EXT-SRV-ATTR:user_data | None | 2026-03-19 03:48:25.798161 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-19 03:48:25.798176 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-19 03:48:25.798185 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-19 03:48:25.798193 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-19T03:46:56.000000 | 2026-03-19 03:48:25.798207 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-19 03:48:25.798215 | orchestrator | | accessIPv4 | | 2026-03-19 03:48:25.798223 | orchestrator | | accessIPv6 | | 2026-03-19 03:48:25.798235 | orchestrator | | addresses | test=192.168.112.169, 192.168.200.153 | 2026-03-19 03:48:25.798244 | orchestrator | | config_drive | | 2026-03-19 03:48:25.798252 | orchestrator | | created | 2026-03-19T03:46:30Z | 2026-03-19 03:48:25.798265 | orchestrator | | description | None | 2026-03-19 03:48:25.798273 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-19 03:48:25.798281 | orchestrator | | hostId | 732a430475750f543aad8839a36b15aa74e9c8f99f27d1d1e1585190 | 2026-03-19 03:48:25.798289 | orchestrator | | host_status | None | 2026-03-19 03:48:25.798303 | orchestrator | | id | a4744a02-eb59-4c1b-b43f-cd9ac52436f5 | 2026-03-19 03:48:25.798312 | orchestrator | | image | N/A (booted from volume) | 2026-03-19 03:48:25.798320 | orchestrator | | key_name | test | 2026-03-19 03:48:25.798331 | orchestrator | | locked | False | 2026-03-19 03:48:25.798339 | orchestrator | | locked_reason | None | 2026-03-19 03:48:25.798352 | orchestrator | | name | test-1 | 2026-03-19 03:48:25.798360 | orchestrator | | pinned_availability_zone | None | 
2026-03-19 03:48:25.798368 | orchestrator | | progress | 0 | 2026-03-19 03:48:25.798377 | orchestrator | | project_id | 42ab47695de54fc6bea17ce3f2d5218c | 2026-03-19 03:48:25.798387 | orchestrator | | properties | hostname='test-1' | 2026-03-19 03:48:25.798401 | orchestrator | | security_groups | name='ssh' | 2026-03-19 03:48:25.798411 | orchestrator | | | name='icmp' | 2026-03-19 03:48:25.798425 | orchestrator | | server_groups | None | 2026-03-19 03:48:25.798434 | orchestrator | | status | ACTIVE | 2026-03-19 03:48:25.798444 | orchestrator | | tags | test | 2026-03-19 03:48:25.798458 | orchestrator | | trusted_image_certificates | None | 2026-03-19 03:48:25.798468 | orchestrator | | updated | 2026-03-19T03:47:28Z | 2026-03-19 03:48:25.798477 | orchestrator | | user_id | 896afb4497f243c68c343b6c47443049 | 2026-03-19 03:48:25.798487 | orchestrator | | volumes_attached | delete_on_termination='True', id='314f12d5-eab5-4b96-8b76-bfbf4be4a7d6' | 2026-03-19 03:48:25.801206 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:26.055777 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-19 03:48:29.082286 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:29.082425 | 
orchestrator | | Field | Value | 2026-03-19 03:48:29.082442 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:29.082455 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-19 03:48:29.082492 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-19 03:48:29.082505 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-19 03:48:29.082516 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-19 03:48:29.082544 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-19 03:48:29.082556 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-19 03:48:29.082591 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-19 03:48:29.082612 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-19 03:48:29.082659 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-19 03:48:29.082688 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-19 03:48:29.082722 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-19 03:48:29.082743 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-19 03:48:29.082763 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-19 03:48:29.082781 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-19 03:48:29.082798 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-19 03:48:29.082812 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-19T03:46:56.000000 | 2026-03-19 03:48:29.082835 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-19 03:48:29.082848 | orchestrator | | accessIPv4 | | 2026-03-19 03:48:29.082862 | 
orchestrator | | accessIPv6 | | 2026-03-19 03:48:29.082887 | orchestrator | | addresses | test=192.168.112.127, 192.168.200.56 | 2026-03-19 03:48:29.082901 | orchestrator | | config_drive | | 2026-03-19 03:48:29.082915 | orchestrator | | created | 2026-03-19T03:46:30Z | 2026-03-19 03:48:29.082928 | orchestrator | | description | None | 2026-03-19 03:48:29.082940 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-19 03:48:29.082953 | orchestrator | | hostId | 732a430475750f543aad8839a36b15aa74e9c8f99f27d1d1e1585190 | 2026-03-19 03:48:29.082966 | orchestrator | | host_status | None | 2026-03-19 03:48:29.082987 | orchestrator | | id | affc44aa-0da9-405c-b7e0-ffd9d48fcdeb | 2026-03-19 03:48:29.083002 | orchestrator | | image | N/A (booted from volume) | 2026-03-19 03:48:29.083022 | orchestrator | | key_name | test | 2026-03-19 03:48:29.083040 | orchestrator | | locked | False | 2026-03-19 03:48:29.083054 | orchestrator | | locked_reason | None | 2026-03-19 03:48:29.083067 | orchestrator | | name | test-2 | 2026-03-19 03:48:29.083081 | orchestrator | | pinned_availability_zone | None | 2026-03-19 03:48:29.083095 | orchestrator | | progress | 0 | 2026-03-19 03:48:29.083109 | orchestrator | | project_id | 42ab47695de54fc6bea17ce3f2d5218c | 2026-03-19 03:48:29.083120 | orchestrator | | properties | hostname='test-2' | 2026-03-19 03:48:29.083139 | orchestrator | | security_groups | name='ssh' | 2026-03-19 03:48:29.083151 | orchestrator | | | name='icmp' | 2026-03-19 03:48:29.083169 | orchestrator | | server_groups | None | 2026-03-19 03:48:29.083185 | orchestrator | | status | ACTIVE | 2026-03-19 03:48:29.083196 | 
orchestrator | | tags | test | 2026-03-19 03:48:29.083207 | orchestrator | | trusted_image_certificates | None | 2026-03-19 03:48:29.083218 | orchestrator | | updated | 2026-03-19T03:47:29Z | 2026-03-19 03:48:29.083229 | orchestrator | | user_id | 896afb4497f243c68c343b6c47443049 | 2026-03-19 03:48:29.083240 | orchestrator | | volumes_attached | delete_on_termination='True', id='379cb484-6efa-4e11-a035-e809327aba0b' | 2026-03-19 03:48:29.086291 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:29.343543 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-19 03:48:32.291314 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:32.291503 | orchestrator | | Field | Value | 2026-03-19 03:48:32.291535 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:32.291574 | orchestrator | | 
OS-DCF:diskConfig | MANUAL | 2026-03-19 03:48:32.291595 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-19 03:48:32.291615 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-19 03:48:32.291703 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-19 03:48:32.291726 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-19 03:48:32.291745 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-19 03:48:32.291789 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-19 03:48:32.291830 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-19 03:48:32.291852 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-19 03:48:32.291882 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-19 03:48:32.291904 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-19 03:48:32.291926 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-19 03:48:32.291948 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-19 03:48:32.291969 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-19 03:48:32.291990 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-19 03:48:32.292011 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-19T03:46:59.000000 | 2026-03-19 03:48:32.292055 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-19 03:48:32.292077 | orchestrator | | accessIPv4 | | 2026-03-19 03:48:32.292097 | orchestrator | | accessIPv6 | | 2026-03-19 03:48:32.292118 | orchestrator | | addresses | test=192.168.112.149, 192.168.200.20 | 2026-03-19 03:48:32.292138 | orchestrator | | config_drive | | 2026-03-19 03:48:32.292158 | orchestrator | | created | 2026-03-19T03:46:33Z | 2026-03-19 03:48:32.292177 | orchestrator | | description | None | 2026-03-19 03:48:32.292197 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', 
extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-19 03:48:32.292217 | orchestrator | | hostId | bc84d9552822a74ecfcd34e3a70a5db1c57a4d13e300ca6a3806b307 | 2026-03-19 03:48:32.292247 | orchestrator | | host_status | None | 2026-03-19 03:48:32.292882 | orchestrator | | id | 86675254-a3a7-4119-ae6b-8496690563f4 | 2026-03-19 03:48:32.292939 | orchestrator | | image | N/A (booted from volume) | 2026-03-19 03:48:32.292964 | orchestrator | | key_name | test | 2026-03-19 03:48:32.292985 | orchestrator | | locked | False | 2026-03-19 03:48:32.293003 | orchestrator | | locked_reason | None | 2026-03-19 03:48:32.293021 | orchestrator | | name | test-3 | 2026-03-19 03:48:32.293039 | orchestrator | | pinned_availability_zone | None | 2026-03-19 03:48:32.293058 | orchestrator | | progress | 0 | 2026-03-19 03:48:32.293077 | orchestrator | | project_id | 42ab47695de54fc6bea17ce3f2d5218c | 2026-03-19 03:48:32.293113 | orchestrator | | properties | hostname='test-3' | 2026-03-19 03:48:32.293159 | orchestrator | | security_groups | name='ssh' | 2026-03-19 03:48:32.293180 | orchestrator | | | name='icmp' | 2026-03-19 03:48:32.293199 | orchestrator | | server_groups | None | 2026-03-19 03:48:32.293218 | orchestrator | | status | ACTIVE | 2026-03-19 03:48:32.293238 | orchestrator | | tags | test | 2026-03-19 03:48:32.293257 | orchestrator | | trusted_image_certificates | None | 2026-03-19 03:48:32.293277 | orchestrator | | updated | 2026-03-19T03:47:30Z | 2026-03-19 03:48:32.293296 | orchestrator | | user_id | 896afb4497f243c68c343b6c47443049 | 2026-03-19 03:48:32.293327 | orchestrator | | volumes_attached | delete_on_termination='True', id='44527f4f-c40f-42ed-8c2b-f4f805c6567a' | 2026-03-19 03:48:32.295848 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:32.545513 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-19 03:48:35.521781 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:35.521874 | orchestrator | | Field | Value | 2026-03-19 03:48:35.521889 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:35.521900 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-19 03:48:35.521912 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-19 03:48:35.521923 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-19 03:48:35.521935 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-19 03:48:35.521968 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-19 03:48:35.521980 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-19 
03:48:35.522008 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-19 03:48:35.522089 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-19 03:48:35.522115 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-19 03:48:35.522135 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-19 03:48:35.522153 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-19 03:48:35.522172 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-19 03:48:35.522190 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-19 03:48:35.522223 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-19 03:48:35.522245 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-19 03:48:35.522263 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-19T03:46:58.000000 | 2026-03-19 03:48:35.522296 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-19 03:48:35.522314 | orchestrator | | accessIPv4 | | 2026-03-19 03:48:35.522326 | orchestrator | | accessIPv6 | | 2026-03-19 03:48:35.522338 | orchestrator | | addresses | test=192.168.112.155, 192.168.200.53 | 2026-03-19 03:48:35.522349 | orchestrator | | config_drive | | 2026-03-19 03:48:35.522360 | orchestrator | | created | 2026-03-19T03:46:32Z | 2026-03-19 03:48:35.522379 | orchestrator | | description | None | 2026-03-19 03:48:35.522390 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-19 03:48:35.522401 | orchestrator | | hostId | 732a430475750f543aad8839a36b15aa74e9c8f99f27d1d1e1585190 | 2026-03-19 03:48:35.522412 | orchestrator | | host_status | None | 2026-03-19 03:48:35.522430 | orchestrator | | id | 
c388a885-f6f8-429c-a7e0-3206a53ae481 | 2026-03-19 03:48:35.522447 | orchestrator | | image | N/A (booted from volume) | 2026-03-19 03:48:35.522458 | orchestrator | | key_name | test | 2026-03-19 03:48:35.522469 | orchestrator | | locked | False | 2026-03-19 03:48:35.522480 | orchestrator | | locked_reason | None | 2026-03-19 03:48:35.522491 | orchestrator | | name | test-4 | 2026-03-19 03:48:35.522509 | orchestrator | | pinned_availability_zone | None | 2026-03-19 03:48:35.522521 | orchestrator | | progress | 0 | 2026-03-19 03:48:35.522532 | orchestrator | | project_id | 42ab47695de54fc6bea17ce3f2d5218c | 2026-03-19 03:48:35.522543 | orchestrator | | properties | hostname='test-4' | 2026-03-19 03:48:35.522562 | orchestrator | | security_groups | name='ssh' | 2026-03-19 03:48:35.522580 | orchestrator | | | name='icmp' | 2026-03-19 03:48:35.522592 | orchestrator | | server_groups | None | 2026-03-19 03:48:35.522603 | orchestrator | | status | ACTIVE | 2026-03-19 03:48:35.522614 | orchestrator | | tags | test | 2026-03-19 03:48:35.522702 | orchestrator | | trusted_image_certificates | None | 2026-03-19 03:48:35.522717 | orchestrator | | updated | 2026-03-19T03:47:31Z | 2026-03-19 03:48:35.522728 | orchestrator | | user_id | 896afb4497f243c68c343b6c47443049 | 2026-03-19 03:48:35.522739 | orchestrator | | volumes_attached | delete_on_termination='True', id='df77fbb1-e837-4dc4-9ab1-97afa4b1b997' | 2026-03-19 03:48:35.525960 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-19 03:48:35.765188 | orchestrator | + server_ping 2026-03-19 03:48:35.766288 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-19 03:48:35.766311 | orchestrator | ++ tr -d '\r' 2026-03-19 03:48:38.622334 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-19 03:48:38.622430 | orchestrator | + ping -c3 192.168.112.155 2026-03-19 03:48:38.641959 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data. 2026-03-19 03:48:38.642125 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=10.6 ms 2026-03-19 03:48:39.636618 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=3.38 ms 2026-03-19 03:48:40.637442 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.26 ms 2026-03-19 03:48:40.637547 | orchestrator | 2026-03-19 03:48:40.637564 | orchestrator | --- 192.168.112.155 ping statistics --- 2026-03-19 03:48:40.637576 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-19 03:48:40.637586 | orchestrator | rtt min/avg/max/mdev = 2.264/5.425/10.629/3.707 ms 2026-03-19 03:48:40.638167 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-19 03:48:40.638202 | orchestrator | + ping -c3 192.168.112.149 2026-03-19 03:48:40.652137 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data. 
2026-03-19 03:48:40.652228 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=8.91 ms 2026-03-19 03:48:41.647477 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.84 ms 2026-03-19 03:48:42.647914 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=1.74 ms 2026-03-19 03:48:42.648054 | orchestrator | 2026-03-19 03:48:42.648073 | orchestrator | --- 192.168.112.149 ping statistics --- 2026-03-19 03:48:42.648086 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-19 03:48:42.648097 | orchestrator | rtt min/avg/max/mdev = 1.743/4.498/8.911/3.152 ms 2026-03-19 03:48:42.648454 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-19 03:48:42.648479 | orchestrator | + ping -c3 192.168.112.161 2026-03-19 03:48:42.663364 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 2026-03-19 03:48:42.663457 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=10.1 ms 2026-03-19 03:48:43.657164 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.78 ms 2026-03-19 03:48:44.658014 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=2.08 ms 2026-03-19 03:48:44.658362 | orchestrator | 2026-03-19 03:48:44.658388 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-03-19 03:48:44.658401 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-19 03:48:44.658413 | orchestrator | rtt min/avg/max/mdev = 2.077/4.978/10.073/3.614 ms 2026-03-19 03:48:44.658437 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-19 03:48:44.658449 | orchestrator | + ping -c3 192.168.112.127 2026-03-19 03:48:44.671057 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
2026-03-19 03:48:44.671154 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.65 ms 2026-03-19 03:48:45.668123 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.79 ms 2026-03-19 03:48:46.669944 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.24 ms 2026-03-19 03:48:46.670094 | orchestrator | 2026-03-19 03:48:46.670107 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-03-19 03:48:46.670115 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-19 03:48:46.670122 | orchestrator | rtt min/avg/max/mdev = 2.244/4.225/7.648/2.430 ms 2026-03-19 03:48:46.670610 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-19 03:48:46.670621 | orchestrator | + ping -c3 192.168.112.169 2026-03-19 03:48:46.685212 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2026-03-19 03:48:46.685279 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=10.1 ms 2026-03-19 03:48:47.678980 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.80 ms 2026-03-19 03:48:48.681209 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.51 ms 2026-03-19 03:48:48.681341 | orchestrator | 2026-03-19 03:48:48.681366 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-03-19 03:48:48.681378 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-19 03:48:48.681389 | orchestrator | rtt min/avg/max/mdev = 2.511/5.148/10.130/3.524 ms 2026-03-19 03:48:48.681399 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-19 03:48:48.872858 | orchestrator | ok: Runtime: 0:08:50.417326 2026-03-19 03:48:48.929017 | 2026-03-19 03:48:48.929167 | TASK [Run tempest] 2026-03-19 03:48:49.462442 | orchestrator | skipping: Conditional result was False 2026-03-19 03:48:49.479901 | 2026-03-19 
03:48:49.480071 | TASK [Check prometheus alert status] 2026-03-19 03:48:50.015751 | orchestrator | skipping: Conditional result was False 2026-03-19 03:48:50.023296 | 2026-03-19 03:48:50.023405 | PLAY [Upgrade testbed] 2026-03-19 03:48:50.033396 | 2026-03-19 03:48:50.033555 | TASK [Print next ceph version] 2026-03-19 03:48:50.113796 | orchestrator | ok 2026-03-19 03:48:50.124916 | 2026-03-19 03:48:50.125085 | TASK [Print next openstack version] 2026-03-19 03:48:50.236064 | orchestrator | ok 2026-03-19 03:48:50.247682 | 2026-03-19 03:48:50.247814 | TASK [Print next manager version] 2026-03-19 03:48:50.316803 | orchestrator | ok 2026-03-19 03:48:50.326740 | 2026-03-19 03:48:50.326897 | TASK [Set cloud fact (Zuul deployment)] 2026-03-19 03:48:50.374605 | orchestrator | ok 2026-03-19 03:48:50.385303 | 2026-03-19 03:48:50.385434 | TASK [Set cloud fact (local deployment)] 2026-03-19 03:48:50.431222 | orchestrator | skipping: Conditional result was False 2026-03-19 03:48:50.446410 | 2026-03-19 03:48:50.446617 | TASK [Fetch manager address] 2026-03-19 03:48:50.749152 | orchestrator | ok 2026-03-19 03:48:50.760965 | 2026-03-19 03:48:50.761108 | TASK [Set manager_host address] 2026-03-19 03:48:50.832565 | orchestrator | ok 2026-03-19 03:48:50.844193 | 2026-03-19 03:48:50.844316 | TASK [Run upgrade] 2026-03-19 03:48:51.510936 | orchestrator | + set -e 2026-03-19 03:48:51.511074 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:48:51.511087 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:48:51.511099 | orchestrator | + CEPH_VERSION=reef 2026-03-19 03:48:51.511106 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-19 03:48:51.511112 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-19 03:48:51.511125 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-03-19 03:48:51.519980 | orchestrator | + set -e 2026-03-19 03:48:51.520057 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-19 03:48:51.520065 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 03:48:51.520075 | orchestrator | ++ INTERACTIVE=false 2026-03-19 03:48:51.520080 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 03:48:51.520089 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 03:48:51.521355 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-03-19 03:48:51.554940 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-03-19 03:48:51.555637 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-19 03:48:51.595577 | orchestrator | 2026-03-19 03:48:51.595673 | orchestrator | # UPGRADE MANAGER 2026-03-19 03:48:51.595683 | orchestrator | 2026-03-19 03:48:51.595688 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-03-19 03:48:51.595693 | orchestrator | + echo 2026-03-19 03:48:51.595697 | orchestrator | + echo '# UPGRADE MANAGER' 2026-03-19 03:48:51.595703 | orchestrator | + echo 2026-03-19 03:48:51.595707 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:48:51.595712 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:48:51.595716 | orchestrator | + CEPH_VERSION=reef 2026-03-19 03:48:51.595720 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-19 03:48:51.595724 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-19 03:48:51.595736 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-03-19 03:48:51.603513 | orchestrator | + set -e 2026-03-19 03:48:51.603551 | orchestrator | + VERSION=10.0.0-rc.1 2026-03-19 03:48:51.603557 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-19 03:48:51.608835 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-03-19 03:48:51.608927 | orchestrator | + sed -i /ceph_version:/d 
/opt/configuration/environments/manager/configuration.yml 2026-03-19 03:48:51.613327 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-19 03:48:51.618847 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-19 03:48:51.627983 | orchestrator | /opt/configuration ~ 2026-03-19 03:48:51.628059 | orchestrator | + set -e 2026-03-19 03:48:51.628071 | orchestrator | + pushd /opt/configuration 2026-03-19 03:48:51.628081 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 03:48:51.628091 | orchestrator | + source /opt/venv/bin/activate 2026-03-19 03:48:51.629398 | orchestrator | ++ deactivate nondestructive 2026-03-19 03:48:51.629455 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:51.629462 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:51.629468 | orchestrator | ++ hash -r 2026-03-19 03:48:51.629474 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:51.629479 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-19 03:48:51.629484 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-19 03:48:51.629517 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-19 03:48:51.629525 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-19 03:48:51.629530 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-19 03:48:51.629535 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-19 03:48:51.629584 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-19 03:48:51.629591 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:51.629597 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:51.629603 | orchestrator | ++ export PATH 2026-03-19 03:48:51.629610 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:51.629705 | orchestrator | ++ '[' -z '' ']' 2026-03-19 03:48:51.629745 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-19 03:48:51.629752 | orchestrator | ++ PS1='(venv) ' 2026-03-19 03:48:51.629757 | orchestrator | ++ export PS1 2026-03-19 03:48:51.629763 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-19 03:48:51.629768 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-19 03:48:51.629853 | orchestrator | ++ hash -r 2026-03-19 03:48:51.630258 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-19 03:48:52.793537 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-19 03:48:52.794392 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-19 03:48:52.795678 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-19 03:48:52.797042 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-19 03:48:52.798258 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-19 03:48:52.808703 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-19 03:48:52.810101 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-19 03:48:52.811497 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-19 03:48:52.812835 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-19 03:48:52.844606 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-19 03:48:52.846338 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-19 03:48:52.848342 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-19 03:48:52.849435 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-19 03:48:52.853626 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-19 03:48:53.085909 | orchestrator | ++ which gilt 2026-03-19 03:48:53.088800 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-19 03:48:53.088856 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-19 03:48:53.303195 | orchestrator | osism.cfg-generics: 2026-03-19 03:48:53.408197 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-19 03:48:53.409190 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-19 03:48:53.411021 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-19 03:48:53.411059 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-19 03:48:54.489171 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-19 03:48:54.499140 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-19 03:48:54.880338 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-19 03:48:54.930073 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 03:48:54.930157 | orchestrator | + deactivate 2026-03-19 03:48:54.930167 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-19 03:48:54.930176 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:54.930183 | orchestrator | + export PATH 2026-03-19 03:48:54.930190 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-19 03:48:54.930197 | orchestrator | + '[' -n '' ']' 2026-03-19 03:48:54.930204 | orchestrator | + hash -r 2026-03-19 03:48:54.930210 | orchestrator | + '[' -n '' ']' 2026-03-19 03:48:54.930217 | orchestrator | + unset VIRTUAL_ENV 2026-03-19 03:48:54.930223 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-19 03:48:54.930230 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-19 03:48:54.930236 | orchestrator | + unset -f deactivate 2026-03-19 03:48:54.930253 | orchestrator | ~ 2026-03-19 03:48:54.930260 | orchestrator | + popd 2026-03-19 03:48:54.932088 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-19 03:48:54.932127 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-19 03:48:54.936381 | orchestrator | + set -e 2026-03-19 03:48:54.936430 | orchestrator | + NAMESPACE=kolla/release 2026-03-19 03:48:54.936440 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-19 03:48:54.946699 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-19 03:48:54.953865 | orchestrator | /opt/configuration ~ 2026-03-19 03:48:54.953962 | orchestrator | + set -e 2026-03-19 03:48:54.953981 | orchestrator | + pushd /opt/configuration 2026-03-19 03:48:54.953995 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 03:48:54.954060 | orchestrator | + source /opt/venv/bin/activate 2026-03-19 03:48:54.954078 | orchestrator | ++ deactivate nondestructive 2026-03-19 03:48:54.954092 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:54.954112 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:54.954125 | orchestrator | ++ hash -r 2026-03-19 03:48:54.954137 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:54.954150 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-19 03:48:54.954163 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-19 03:48:54.954183 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-19 03:48:54.954199 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-19 03:48:54.954211 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-19 03:48:54.954230 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-19 03:48:54.954249 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-19 03:48:54.954265 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:54.954284 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:54.954300 | orchestrator | ++ export PATH 2026-03-19 03:48:54.954311 | orchestrator | ++ '[' -n '' ']' 2026-03-19 03:48:54.954484 | orchestrator | ++ '[' -z '' ']' 2026-03-19 03:48:54.954504 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-19 03:48:54.954516 | orchestrator | ++ PS1='(venv) ' 2026-03-19 03:48:54.954527 | orchestrator | ++ export PS1 2026-03-19 03:48:54.954538 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-19 03:48:54.954559 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-19 03:48:54.954571 | orchestrator | ++ hash -r 2026-03-19 03:48:54.954582 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-19 03:48:55.514359 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-19 03:48:55.515173 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-19 03:48:55.516686 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-19 03:48:55.518220 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-19 03:48:55.519387 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-19 03:48:55.529521 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-19 03:48:55.531229 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-19 03:48:55.532332 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-19 03:48:55.533842 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-19 03:48:55.570277 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-19 03:48:55.571786 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-19 03:48:55.573503 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-19 03:48:55.574953 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-19 03:48:55.579089 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-19 03:48:55.801238 | orchestrator | ++ which gilt 2026-03-19 03:48:55.804026 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-19 03:48:55.804077 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-19 03:48:55.960739 | orchestrator | osism.cfg-generics: 2026-03-19 03:48:56.013090 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-19 03:48:56.013193 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-19 03:48:56.013209 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-19 03:48:56.013222 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-19 03:48:56.647900 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-19 03:48:56.659760 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-19 03:48:57.135858 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-19 03:48:57.182886 | orchestrator | ~ 2026-03-19 03:48:57.182979 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-19 03:48:57.182994 | orchestrator | + deactivate 2026-03-19 03:48:57.183027 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-19 03:48:57.183039 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-19 03:48:57.183049 | orchestrator | + export PATH 2026-03-19 03:48:57.183060 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-19 03:48:57.183070 | orchestrator | + '[' -n '' ']' 2026-03-19 03:48:57.183080 | orchestrator | + hash -r 2026-03-19 03:48:57.183089 | orchestrator | + '[' -n '' ']' 2026-03-19 03:48:57.183100 | orchestrator | + unset VIRTUAL_ENV 2026-03-19 03:48:57.183110 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-19 03:48:57.183120 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-19 03:48:57.183130 | orchestrator | + unset -f deactivate 2026-03-19 03:48:57.183140 | orchestrator | + popd 2026-03-19 03:48:57.184241 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-19 03:48:57.236233 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-19 03:48:57.236767 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-19 03:48:57.333435 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 03:48:57.333533 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-19 03:48:57.339234 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-19 03:48:57.345702 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-19 03:48:57.409494 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-19 03:48:57.410145 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-19 03:48:57.519487 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-19 03:48:57.519634 | orchestrator | ++ echo true 2026-03-19 03:48:57.519767 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-19 03:48:57.522156 | orchestrator | +++ semver 2024.2 2024.2 2026-03-19 03:48:57.606904 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-19 03:48:57.607514 | orchestrator | +++ semver 2024.2 2025.1 2026-03-19 03:48:57.679199 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-19 03:48:57.679272 | orchestrator | ++ echo false 2026-03-19 03:48:57.680368 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-19 03:48:57.680394 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-19 03:48:57.680405 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-19 03:48:57.680417 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-19 03:48:57.680428 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-03-19 03:48:57.686419 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-03-19 03:48:57.686549 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-19 03:48:57.709773 | orchestrator | export RABBITMQ3TO4=true 2026-03-19 03:48:57.713726 | orchestrator | + osism update manager 2026-03-19 03:49:03.330916 | orchestrator | Collecting uv 2026-03-19 03:49:03.529644 | orchestrator | Downloading uv-0.10.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-19 03:49:03.551775 | orchestrator | Downloading uv-0.10.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.6 MB) 2026-03-19 03:49:04.424309 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.6/23.6 MB 33.0 MB/s eta 0:00:00 2026-03-19 03:49:04.485522 | orchestrator | Installing collected packages: uv 2026-03-19 03:49:04.928392 | orchestrator | Successfully installed uv-0.10.11 2026-03-19 03:49:05.551541 | orchestrator | Resolved 11 packages in 304ms 2026-03-19 03:49:05.584924 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-19 03:49:05.585584 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-19 03:49:05.585813 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-19 03:49:05.725228 | orchestrator | Downloading ansible (54.5MiB) 2026-03-19 03:49:05.885372 | orchestrator | Downloaded netaddr 2026-03-19 03:49:06.006604 | orchestrator | Downloaded cryptography 2026-03-19 03:49:06.181643 | orchestrator | Downloaded ansible-core 2026-03-19 03:49:13.448023 | orchestrator | Downloaded ansible 2026-03-19 03:49:13.448132 | orchestrator | Prepared 11 packages in 7.89s 2026-03-19 03:49:13.991973 | orchestrator | Installed 11 packages in 542ms 2026-03-19 03:49:13.992071 | orchestrator | + ansible==11.11.0 2026-03-19 03:49:13.992087 | orchestrator | + ansible-core==2.18.14 2026-03-19 03:49:13.992099 | orchestrator | + cffi==2.0.0 2026-03-19 03:49:13.992111 | orchestrator | + cryptography==46.0.5 2026-03-19 03:49:13.992123 | orchestrator | + jinja2==3.1.6 2026-03-19 03:49:13.992134 | orchestrator 
| + markupsafe==3.0.3 2026-03-19 03:49:13.992146 | orchestrator | + netaddr==1.3.0 2026-03-19 03:49:13.992156 | orchestrator | + packaging==26.0 2026-03-19 03:49:13.992167 | orchestrator | + pycparser==3.0 2026-03-19 03:49:13.992178 | orchestrator | + pyyaml==6.0.3 2026-03-19 03:49:13.992189 | orchestrator | + resolvelib==1.0.1 2026-03-19 03:49:15.099076 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-1992323e26eza1/tmpfk52sp9r/ansible-collection-services9he_mmxm'... 2026-03-19 03:49:16.807367 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-19 03:49:16.807504 | orchestrator | Already on 'main' 2026-03-19 03:49:17.288588 | orchestrator | Starting galaxy collection install process 2026-03-19 03:49:17.288794 | orchestrator | Process install dependency map 2026-03-19 03:49:17.288821 | orchestrator | Starting collection install process 2026-03-19 03:49:17.288839 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-19 03:49:17.288859 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-19 03:49:17.288877 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-19 03:49:17.812352 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-199257q23ljsb5/tmpku2x93b4/ansible-playbooks-manageri_92wgpj'... 2026-03-19 03:49:18.389889 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-03-19 03:49:18.389958 | orchestrator | Already on 'main' 2026-03-19 03:49:18.662919 | orchestrator | Starting galaxy collection install process 2026-03-19 03:49:18.663015 | orchestrator | Process install dependency map 2026-03-19 03:49:18.663029 | orchestrator | Starting collection install process 2026-03-19 03:49:18.663038 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-19 03:49:18.663049 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-19 03:49:18.663055 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-19 03:49:19.347490 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-19 03:49:19.347581 | orchestrator | -vvvv to see details 2026-03-19 03:49:19.738229 | orchestrator | 2026-03-19 03:49:19.738299 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-19 03:49:19.738307 | orchestrator | 2026-03-19 03:49:19.738312 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-19 03:49:23.804913 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:23.805027 | orchestrator | 2026-03-19 03:49:23.805043 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-19 03:49:23.867220 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 03:49:23.867377 | orchestrator | 2026-03-19 03:49:23.867435 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-19 03:49:25.621604 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:25.622267 | orchestrator | 2026-03-19 03:49:25.622301 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-03-19 03:49:25.682368 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:25.682497 | orchestrator | 2026-03-19 03:49:25.682523 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-19 03:49:25.757335 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-19 03:49:25.757441 | orchestrator | 2026-03-19 03:49:25.757457 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-19 03:49:29.908488 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-19 03:49:29.908581 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-19 03:49:29.908592 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-19 03:49:29.908612 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-19 03:49:29.908620 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-19 03:49:29.908629 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-19 03:49:29.908637 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-19 03:49:29.908645 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-19 03:49:29.908654 | orchestrator | 2026-03-19 03:49:29.908662 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-19 03:49:31.009200 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:31.009311 | orchestrator | 2026-03-19 03:49:31.009331 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-19 03:49:31.877028 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:31.877126 | orchestrator | 2026-03-19 03:49:31.877141 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-03-19 03:49:31.973281 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-19 03:49:31.973362 | orchestrator | 2026-03-19 03:49:31.973373 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-19 03:49:33.818255 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-19 03:49:33.818362 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-19 03:49:33.818377 | orchestrator | 2026-03-19 03:49:33.818390 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-19 03:49:34.841210 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:34.841305 | orchestrator | 2026-03-19 03:49:34.841318 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-19 03:49:34.903269 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:49:34.903352 | orchestrator | 2026-03-19 03:49:34.903365 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-19 03:49:34.988602 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-19 03:49:34.988786 | orchestrator | 2026-03-19 03:49:34.988805 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-19 03:49:35.957934 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:35.958099 | orchestrator | 2026-03-19 03:49:35.958118 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-19 03:49:36.037550 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-19 03:49:36.037742 | 
orchestrator | 2026-03-19 03:49:36.037775 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-19 03:49:37.931670 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-19 03:49:37.931890 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-19 03:49:37.931907 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:37.931920 | orchestrator | 2026-03-19 03:49:37.931933 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-19 03:49:38.789921 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:38.790007 | orchestrator | 2026-03-19 03:49:38.790060 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-19 03:49:38.847909 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:49:38.848007 | orchestrator | 2026-03-19 03:49:38.848022 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-19 03:49:38.945864 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-19 03:49:38.945986 | orchestrator | 2026-03-19 03:49:38.946000 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-19 03:49:39.635273 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:39.635413 | orchestrator | 2026-03-19 03:49:39.635434 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-19 03:49:40.169573 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:40.169646 | orchestrator | 2026-03-19 03:49:40.169654 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-19 03:49:41.940500 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-19 03:49:41.940621 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-03-19 03:49:41.940646 | orchestrator | 2026-03-19 03:49:41.940666 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-19 03:49:43.097354 | orchestrator | changed: [testbed-manager] 2026-03-19 03:49:43.097455 | orchestrator | 2026-03-19 03:49:43.097471 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-19 03:49:43.666371 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:43.666460 | orchestrator | 2026-03-19 03:49:43.666473 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-19 03:49:44.238242 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:44.238365 | orchestrator | 2026-03-19 03:49:44.238414 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-19 03:49:44.300636 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:49:44.300824 | orchestrator | 2026-03-19 03:49:44.300856 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-19 03:49:44.383774 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-19 03:49:44.383851 | orchestrator | 2026-03-19 03:49:44.383860 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-19 03:49:44.455555 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:44.455664 | orchestrator | 2026-03-19 03:49:44.455712 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-19 03:49:47.412383 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-03-19 03:49:47.412471 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-03-19 03:49:47.412480 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-03-19 03:49:47.412486 | orchestrator | 2026-03-19 03:49:47.412493 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-19 03:49:48.436761 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:48.436859 | orchestrator | 2026-03-19 03:49:48.436875 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-19 03:49:49.413597 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:49.413806 | orchestrator | 2026-03-19 03:49:49.413828 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-19 03:49:50.393344 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:50.394251 | orchestrator | 2026-03-19 03:49:50.394305 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-19 03:49:50.483220 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-19 03:49:50.483351 | orchestrator | 2026-03-19 03:49:50.483369 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-19 03:49:50.547089 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:50.547204 | orchestrator | 2026-03-19 03:49:50.547229 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-19 03:49:51.562292 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-03-19 03:49:51.562389 | orchestrator | 2026-03-19 03:49:51.562404 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-19 03:49:51.662522 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-19 03:49:51.662620 | orchestrator | 2026-03-19 03:49:51.662635 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-19 03:49:52.688848 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:52.688942 | orchestrator | 2026-03-19 03:49:52.688955 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-19 03:49:53.808902 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:53.809008 | orchestrator | 2026-03-19 03:49:53.809024 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-19 03:49:53.887297 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:49:53.887407 | orchestrator | 2026-03-19 03:49:53.887424 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-19 03:49:53.949399 | orchestrator | ok: [testbed-manager] 2026-03-19 03:49:53.949498 | orchestrator | 2026-03-19 03:49:53.949514 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-19 03:49:55.323691 | orchestrator | changed: [testbed-manager] 2026-03-19 03:49:55.323804 | orchestrator | 2026-03-19 03:49:55.323816 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-19 03:51:05.389451 | orchestrator | changed: [testbed-manager] 2026-03-19 03:51:05.389575 | orchestrator | 2026-03-19 03:51:05.389594 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-19 03:51:06.646504 | orchestrator | ok: [testbed-manager] 2026-03-19 03:51:06.646624 | orchestrator | 2026-03-19 03:51:06.646640 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-19 03:51:06.717809 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:51:06.717905 | orchestrator | 2026-03-19 03:51:06.717921 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-19 
03:51:07.803427 | orchestrator | ok: [testbed-manager] 2026-03-19 03:51:07.803532 | orchestrator | 2026-03-19 03:51:07.803547 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-19 03:51:07.882585 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:51:07.882713 | orchestrator | 2026-03-19 03:51:07.882731 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-19 03:51:07.882786 | orchestrator | 2026-03-19 03:51:07.882799 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-19 03:51:27.120690 | orchestrator | changed: [testbed-manager] 2026-03-19 03:51:27.120834 | orchestrator | 2026-03-19 03:51:27.120850 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-19 03:52:27.192377 | orchestrator | Pausing for 60 seconds 2026-03-19 03:52:27.192476 | orchestrator | changed: [testbed-manager] 2026-03-19 03:52:27.192487 | orchestrator | 2026-03-19 03:52:27.192495 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-03-19 03:52:27.251970 | orchestrator | ok: [testbed-manager] 2026-03-19 03:52:27.252068 | orchestrator | 2026-03-19 03:52:27.252078 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-19 03:52:31.320505 | orchestrator | changed: [testbed-manager] 2026-03-19 03:52:31.320610 | orchestrator | 2026-03-19 03:52:31.320628 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-19 03:53:33.957948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-19 03:53:33.958139 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-03-19 03:53:33.958158 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-19 03:53:33.958172 | orchestrator | changed: [testbed-manager] 2026-03-19 03:53:33.958185 | orchestrator | 2026-03-19 03:53:33.958197 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-19 03:53:45.285996 | orchestrator | changed: [testbed-manager] 2026-03-19 03:53:45.286123 | orchestrator | 2026-03-19 03:53:45.286134 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-19 03:53:45.365248 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-19 03:53:45.365351 | orchestrator | 2026-03-19 03:53:45.365361 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-19 03:53:45.365369 | orchestrator | 2026-03-19 03:53:45.365375 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-19 03:53:45.419778 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:53:45.419910 | orchestrator | 2026-03-19 03:53:45.419924 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-19 03:53:45.497020 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-19 03:53:45.497145 | orchestrator | 2026-03-19 03:53:45.497171 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-19 03:53:46.605488 | orchestrator | changed: [testbed-manager] 2026-03-19 03:53:46.605575 | orchestrator | 2026-03-19 03:53:46.605588 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-19 03:53:50.159047 
| orchestrator | ok: [testbed-manager] 2026-03-19 03:53:50.159157 | orchestrator | 2026-03-19 03:53:50.159173 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-19 03:53:50.249311 | orchestrator | ok: [testbed-manager] => { 2026-03-19 03:53:50.249423 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-19 03:53:50.249440 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-19 03:53:50.249452 | orchestrator | "Checking running containers against expected versions...", 2026-03-19 03:53:50.249464 | orchestrator | "", 2026-03-19 03:53:50.249476 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-19 03:53:50.249492 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-19 03:53:50.249511 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.249529 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-19 03:53:50.249547 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.249566 | orchestrator | "", 2026-03-19 03:53:50.249579 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-19 03:53:50.249590 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-19 03:53:50.249601 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.249613 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-19 03:53:50.249632 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.249651 | orchestrator | "", 2026-03-19 03:53:50.249669 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-19 03:53:50.249688 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-19 03:53:50.249708 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.249726 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-19 03:53:50.249742 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.249753 | orchestrator | "", 2026-03-19 03:53:50.249764 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-19 03:53:50.249775 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-19 03:53:50.249786 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.249797 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-19 03:53:50.249808 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.249818 | orchestrator | "", 2026-03-19 03:53:50.249830 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-19 03:53:50.249847 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-19 03:53:50.249866 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.249917 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-19 03:53:50.249930 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.249943 | orchestrator | "", 2026-03-19 03:53:50.249956 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-19 03:53:50.249995 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250007 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250116 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250136 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250155 | orchestrator | "", 2026-03-19 03:53:50.250175 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-19 03:53:50.250192 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-19 03:53:50.250209 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250228 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-19 
03:53:50.250246 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250265 | orchestrator | "", 2026-03-19 03:53:50.250277 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-19 03:53:50.250288 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-19 03:53:50.250298 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250321 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-19 03:53:50.250333 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250343 | orchestrator | "", 2026-03-19 03:53:50.250354 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-19 03:53:50.250365 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-19 03:53:50.250382 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250398 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-19 03:53:50.250409 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250420 | orchestrator | "", 2026-03-19 03:53:50.250435 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-19 03:53:50.250447 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-19 03:53:50.250458 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250469 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-19 03:53:50.250487 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250515 | orchestrator | "", 2026-03-19 03:53:50.250535 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-19 03:53:50.250553 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250570 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250587 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250605 | orchestrator | " Status: ✅ MATCH", 2026-03-19 
03:53:50.250622 | orchestrator | "", 2026-03-19 03:53:50.250639 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-19 03:53:50.250657 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250676 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250695 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250713 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250731 | orchestrator | "", 2026-03-19 03:53:50.250749 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-19 03:53:50.250766 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250783 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250801 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250819 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.250837 | orchestrator | "", 2026-03-19 03:53:50.250855 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-19 03:53:50.250939 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.250961 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.250980 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.251030 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.251049 | orchestrator | "", 2026-03-19 03:53:50.251066 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-19 03:53:50.251084 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.251121 | orchestrator | " Enabled: true", 2026-03-19 03:53:50.251140 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-19 03:53:50.251159 | orchestrator | " Status: ✅ MATCH", 2026-03-19 03:53:50.251177 | orchestrator | "", 2026-03-19 03:53:50.251195 | orchestrator | "=== Summary 
===", 2026-03-19 03:53:50.251213 | orchestrator | "Errors (version mismatches): 0", 2026-03-19 03:53:50.251231 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-19 03:53:50.251248 | orchestrator | "", 2026-03-19 03:53:50.251265 | orchestrator | "✅ All running containers match expected versions!" 2026-03-19 03:53:50.251281 | orchestrator | ] 2026-03-19 03:53:50.251297 | orchestrator | } 2026-03-19 03:53:50.251313 | orchestrator | 2026-03-19 03:53:50.251329 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-19 03:53:50.303915 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:53:50.303994 | orchestrator | 2026-03-19 03:53:50.304002 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:53:50.304009 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-03-19 03:53:50.304015 | orchestrator | 2026-03-19 03:54:02.815464 | orchestrator | 2026-03-19 03:54:02 | INFO  | Task f255d55f-ec84-43fb-b94d-17e2fe242216 (sync inventory) is running in background. Output coming soon. 
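The version check above compares, per service, an expected image reference against the one actually running, and counts mismatches and missing containers into the summary. A minimal sketch of that per-service comparison is below; `check_service` and its argument order are assumptions for illustration, not the actual OSISM check script, which would obtain the running reference from something like `docker inspect -f '{{.Config.Image}}' <container>`.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-service comparison performed by the
# version check script; names and structure are assumptions.
check_service() {
    local name="$1" expected="$2" running="$3"
    echo "Checking service: ${name}"
    echo "  Expected: ${expected}"
    echo "  Running:  ${running}"
    if [ "${expected}" = "${running}" ]; then
        echo "  Status: MATCH"
        return 0
    else
        echo "  Status: MISMATCH"
        return 1
    fi
}

# Example invocation mirroring one entry from the log output:
check_service api \
    registry.osism.tech/osism/osism:0.20251208.0 \
    registry.osism.tech/osism/osism:0.20251208.0
```

In the real script the return codes would be accumulated into the error/warning counters reported in the `=== Summary ===` block.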
2026-03-19 03:54:32.334304 | orchestrator | 2026-03-19 03:54:04 | INFO  | Starting group_vars file reorganization 2026-03-19 03:54:32.334420 | orchestrator | 2026-03-19 03:54:04 | INFO  | Moved 0 file(s) to their respective directories 2026-03-19 03:54:32.334437 | orchestrator | 2026-03-19 03:54:04 | INFO  | Group_vars file reorganization completed 2026-03-19 03:54:32.334468 | orchestrator | 2026-03-19 03:54:07 | INFO  | Starting variable preparation from inventory 2026-03-19 03:54:32.334481 | orchestrator | 2026-03-19 03:54:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-19 03:54:32.334492 | orchestrator | 2026-03-19 03:54:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-19 03:54:32.334503 | orchestrator | 2026-03-19 03:54:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-19 03:54:32.334515 | orchestrator | 2026-03-19 03:54:10 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-19 03:54:32.334526 | orchestrator | 2026-03-19 03:54:10 | INFO  | Variable preparation completed 2026-03-19 03:54:32.334537 | orchestrator | 2026-03-19 03:54:12 | INFO  | Starting inventory overwrite handling 2026-03-19 03:54:32.334548 | orchestrator | 2026-03-19 03:54:12 | INFO  | Handling group overwrites in 99-overwrite 2026-03-19 03:54:32.334559 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removing group frr:children from 60-generic 2026-03-19 03:54:32.334570 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-19 03:54:32.334581 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-19 03:54:32.334593 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-19 03:54:32.334603 | orchestrator | 2026-03-19 03:54:12 | INFO  | Handling group overwrites in 20-roles 2026-03-19 03:54:32.334614 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-19 03:54:32.334626 | orchestrator | 2026-03-19 03:54:12 | INFO  | Removed 5 group(s) in total 2026-03-19 03:54:32.334637 | orchestrator | 2026-03-19 03:54:12 | INFO  | Inventory overwrite handling completed 2026-03-19 03:54:32.334648 | orchestrator | 2026-03-19 03:54:13 | INFO  | Starting merge of inventory files 2026-03-19 03:54:32.334659 | orchestrator | 2026-03-19 03:54:13 | INFO  | Inventory files merged successfully 2026-03-19 03:54:32.334692 | orchestrator | 2026-03-19 03:54:18 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-19 03:54:32.334704 | orchestrator | 2026-03-19 03:54:30 | INFO  | Successfully wrote ClusterShell configuration 2026-03-19 03:54:32.631437 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-19 03:54:32.631533 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-19 03:54:32.631547 | orchestrator | + local max_attempts=60 2026-03-19 03:54:32.631560 | orchestrator | + local name=kolla-ansible 2026-03-19 03:54:32.631571 | orchestrator | + local attempt_num=1 2026-03-19 03:54:32.632015 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-19 03:54:32.669706 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 03:54:32.669818 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-19 03:54:32.669831 | orchestrator | + local max_attempts=60 2026-03-19 03:54:32.669840 | orchestrator | + local name=osism-ansible 2026-03-19 03:54:32.669849 | orchestrator | + local attempt_num=1 2026-03-19 03:54:32.670886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-19 03:54:32.720288 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-19 03:54:32.720389 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-19 03:54:32.906536 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-19 03:54:32.906625 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-19 03:54:32.906638 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-19 03:54:32.906647 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-19 03:54:32.906659 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-03-19 03:54:32.906668 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-03-19 03:54:32.906676 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-03-19 03:54:32.906683 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-03-19 03:54:32.906691 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 21 seconds ago 2026-03-19 03:54:32.906699 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-03-19 03:54:32.906707 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-03-19 03:54:32.906715 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-03-19 03:54:32.906723 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-19 03:54:32.906752 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-03-19 03:54:32.906761 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-03-19 03:54:32.906769 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-03-19 03:54:32.911139 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-03-19 03:54:32.911199 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-03-19 03:54:32.911208 | orchestrator | + osism apply facts 2026-03-19 03:54:45.101134 | orchestrator | 2026-03-19 03:54:45 | INFO  | Task 626d77de-b480-4294-9394-cb1fba09b498 (facts) was prepared for execution. 2026-03-19 03:54:45.101248 | orchestrator | 2026-03-19 03:54:45 | INFO  | It takes a moment until task 626d77de-b480-4294-9394-cb1fba09b498 (facts) has been started and output is visible here. 
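The `wait_for_container_healthy` calls traced earlier (via `set -x`) poll `docker inspect` for the container's health status. A plausible reconstruction of that helper follows; only the function name, the `max_attempts`/`name`/`attempt_num` locals, and the `docker inspect -f '{{.State.Health.Status}}'` call appear in the trace, so the loop shape, sleep interval, and failure handling are assumptions.

```shell
#!/usr/bin/env bash
# Reconstruction (assumed) of the wait_for_container_healthy helper
# seen in the set -x trace above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's healthcheck status until it reports healthy.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5   # assumed polling interval
    done
}
```

In the log both containers are already healthy, so the trace shows a single `docker inspect` per call and the loop body never runs.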
2026-03-19 03:55:08.445788 | orchestrator | 2026-03-19 03:55:08.446001 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-19 03:55:08.446115 | orchestrator | 2026-03-19 03:55:08.446135 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-19 03:55:08.446151 | orchestrator | Thursday 19 March 2026 03:54:51 +0000 (0:00:01.972) 0:00:01.972 ******** 2026-03-19 03:55:08.446169 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:08.446187 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:55:08.446204 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:55:08.446220 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:55:08.446237 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:55:08.446253 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:55:08.446270 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:55:08.446286 | orchestrator | 2026-03-19 03:55:08.446304 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-19 03:55:08.446322 | orchestrator | Thursday 19 March 2026 03:54:55 +0000 (0:00:03.572) 0:00:05.545 ******** 2026-03-19 03:55:08.446339 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:55:08.446358 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:55:08.446375 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:55:08.446393 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:55:08.446410 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:55:08.446427 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:55:08.446445 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:55:08.446462 | orchestrator | 2026-03-19 03:55:08.446479 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-19 03:55:08.446496 | orchestrator | 2026-03-19 03:55:08.446513 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-19 03:55:08.446529 | orchestrator | Thursday 19 March 2026 03:54:57 +0000 (0:00:02.657) 0:00:08.202 ******** 2026-03-19 03:55:08.446545 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:55:08.446586 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:55:08.446605 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:08.446624 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:55:08.446649 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:55:08.446668 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:55:08.446686 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:55:08.446704 | orchestrator | 2026-03-19 03:55:08.446722 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-19 03:55:08.446740 | orchestrator | 2026-03-19 03:55:08.446758 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-19 03:55:08.446775 | orchestrator | Thursday 19 March 2026 03:55:05 +0000 (0:00:07.273) 0:00:15.476 ******** 2026-03-19 03:55:08.446792 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:55:08.446842 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:55:08.446860 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:55:08.446877 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:55:08.446895 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:55:08.446912 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:55:08.446953 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:55:08.446972 | orchestrator | 2026-03-19 03:55:08.446988 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:55:08.447006 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447025 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-19 03:55:08.447042 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447060 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447077 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447094 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447110 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:08.447127 | orchestrator | 2026-03-19 03:55:08.447145 | orchestrator | 2026-03-19 03:55:08.447163 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:55:08.447180 | orchestrator | Thursday 19 March 2026 03:55:07 +0000 (0:00:02.819) 0:00:18.296 ******** 2026-03-19 03:55:08.447198 | orchestrator | =============================================================================== 2026-03-19 03:55:08.447215 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.27s 2026-03-19 03:55:08.447234 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.57s 2026-03-19 03:55:08.447251 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.82s 2026-03-19 03:55:08.447269 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.66s 2026-03-19 03:55:08.752107 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-19 03:55:08.849307 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 03:55:08.850093 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-19 03:55:08.895993 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-03-19 03:55:08.896070 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-03-19 03:55:08.903213 | orchestrator | + set -e 2026-03-19 03:55:08.903307 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-03-19 03:55:08.903325 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-19 03:55:08.912557 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-03-19 03:55:08.923889 | orchestrator | 2026-03-19 03:55:08.923996 | orchestrator | # UPGRADE SERVICES 2026-03-19 03:55:08.924009 | orchestrator | 2026-03-19 03:55:08.924020 | orchestrator | + set -e 2026-03-19 03:55:08.924030 | orchestrator | + echo 2026-03-19 03:55:08.924040 | orchestrator | + echo '# UPGRADE SERVICES' 2026-03-19 03:55:08.924050 | orchestrator | + echo 2026-03-19 03:55:08.924060 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 03:55:08.925040 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 03:55:08.925063 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 03:55:08.925073 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-19 03:55:08.925082 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 03:55:08.925092 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 03:55:08.925104 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 03:55:08.925114 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 03:55:08.925151 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 03:55:08.925162 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 03:55:08.925171 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 03:55:08.925181 | orchestrator | ++ export ARA=false 2026-03-19 03:55:08.925190 | orchestrator | ++ ARA=false 2026-03-19 03:55:08.925200 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 03:55:08.925210 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 03:55:08.925219 | orchestrator | ++ export TEMPEST=false 
2026-03-19 03:55:08.925229 | orchestrator | ++ TEMPEST=false 2026-03-19 03:55:08.925238 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 03:55:08.925248 | orchestrator | ++ IS_ZUUL=true 2026-03-19 03:55:08.925257 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:55:08.925267 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:55:08.925277 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 03:55:08.925293 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 03:55:08.925310 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 03:55:08.925326 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 03:55:08.925351 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 03:55:08.925370 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 03:55:08.925386 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 03:55:08.925402 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 03:55:08.925418 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-19 03:55:08.925433 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-19 03:55:08.925450 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-19 03:55:08.925466 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-19 03:55:08.925484 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-19 03:55:08.931323 | orchestrator | + set -e 2026-03-19 03:55:08.931403 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 03:55:08.932108 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 03:55:08.932143 | orchestrator | ++ INTERACTIVE=false 2026-03-19 03:55:08.932154 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 03:55:08.932165 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 03:55:08.932176 | orchestrator | + source /opt/manager-vars.sh 2026-03-19 03:55:08.932186 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-19 03:55:08.932197 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-19 03:55:08.932208 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-03-19 03:55:08.932219 | orchestrator | ++ CEPH_VERSION=reef 2026-03-19 03:55:08.932230 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-19 03:55:08.932241 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-19 03:55:08.932273 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-19 03:55:08.932285 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-19 03:55:08.932296 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-19 03:55:08.932307 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-19 03:55:08.932317 | orchestrator | ++ export ARA=false 2026-03-19 03:55:08.932328 | orchestrator | ++ ARA=false 2026-03-19 03:55:08.932339 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-19 03:55:08.932350 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-19 03:55:08.932361 | orchestrator | ++ export TEMPEST=false 2026-03-19 03:55:08.932372 | orchestrator | ++ TEMPEST=false 2026-03-19 03:55:08.932382 | orchestrator | ++ export IS_ZUUL=true 2026-03-19 03:55:08.932393 | orchestrator | ++ IS_ZUUL=true 2026-03-19 03:55:08.932404 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:55:08.932423 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56 2026-03-19 03:55:08.932445 | orchestrator | ++ export EXTERNAL_API=false 2026-03-19 03:55:08.932471 | orchestrator | ++ EXTERNAL_API=false 2026-03-19 03:55:08.932489 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-19 03:55:08.932507 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-19 03:55:08.932523 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-19 03:55:08.932540 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-19 03:55:08.932555 | orchestrator | 2026-03-19 03:55:08.932571 | orchestrator | # PULL IMAGES 2026-03-19 03:55:08.932586 | orchestrator | 2026-03-19 03:55:08.932603 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-19 03:55:08.932616 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-19 03:55:08.932633 | orchestrator 
| ++ export RABBITMQ3TO4=true 2026-03-19 03:55:08.932649 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-19 03:55:08.932666 | orchestrator | + echo 2026-03-19 03:55:08.932682 | orchestrator | + echo '# PULL IMAGES' 2026-03-19 03:55:08.932698 | orchestrator | + echo 2026-03-19 03:55:08.933621 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-19 03:55:08.992353 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-19 03:55:08.992450 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-19 03:55:11.144851 | orchestrator | 2026-03-19 03:55:11 | INFO  | Trying to run play pull-images in environment custom 2026-03-19 03:55:21.268331 | orchestrator | 2026-03-19 03:55:21 | INFO  | Task 07975a53-9551-4575-a7bc-739dc84cf3cd (pull-images) was prepared for execution. 2026-03-19 03:55:21.268427 | orchestrator | 2026-03-19 03:55:21 | INFO  | Task 07975a53-9551-4575-a7bc-739dc84cf3cd is running in background. No more output. Check ARA for logs. 2026-03-19 03:55:21.621455 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-03-19 03:55:21.630668 | orchestrator | + set -e 2026-03-19 03:55:21.630922 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 03:55:21.631019 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 03:55:21.631034 | orchestrator | ++ INTERACTIVE=false 2026-03-19 03:55:21.631045 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 03:55:21.631056 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 03:55:21.631068 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-19 03:55:21.633063 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-19 03:55:21.641122 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:55:21.641207 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-19 03:55:21.641588 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-03-19 03:55:21.695543 | orchestrator | 
+ [[ 1 -ge 0 ]] 2026-03-19 03:55:21.695662 | orchestrator | + osism apply frr 2026-03-19 03:55:33.886325 | orchestrator | 2026-03-19 03:55:33 | INFO  | Task a835ea05-13db-4e60-9915-b70bba65df50 (frr) was prepared for execution. 2026-03-19 03:55:33.886450 | orchestrator | 2026-03-19 03:55:33 | INFO  | It takes a moment until task a835ea05-13db-4e60-9915-b70bba65df50 (frr) has been started and output is visible here. 2026-03-19 03:55:57.394131 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-19 03:55:57.394273 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-19 03:55:57.394317 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-19 03:55:57.394334 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-19 03:55:57.394367 | orchestrator | 2026-03-19 03:55:57.394387 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-19 03:55:57.394404 | orchestrator | 2026-03-19 03:55:57.394517 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-19 03:55:57.394537 | orchestrator | Thursday 19 March 2026 03:55:41 +0000 (0:00:02.379) 0:00:02.379 ******** 2026-03-19 03:55:57.394555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 03:55:57.394574 | orchestrator | 2026-03-19 03:55:57.394592 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-19 03:55:57.394610 | orchestrator | Thursday 19 March 2026 03:55:42 +0000 (0:00:00.778) 0:00:03.158 ******** 2026-03-19 03:55:57.394628 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.394648 | orchestrator | 2026-03-19 03:55:57.394665 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 
2026-03-19 03:55:57.394683 | orchestrator | Thursday 19 March 2026 03:55:43 +0000 (0:00:01.508) 0:00:04.666 ******** 2026-03-19 03:55:57.394702 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.394719 | orchestrator | 2026-03-19 03:55:57.394738 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-19 03:55:57.394757 | orchestrator | Thursday 19 March 2026 03:55:45 +0000 (0:00:01.986) 0:00:06.653 ******** 2026-03-19 03:55:57.394776 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.394795 | orchestrator | 2026-03-19 03:55:57.394813 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-19 03:55:57.394831 | orchestrator | Thursday 19 March 2026 03:55:46 +0000 (0:00:00.961) 0:00:07.615 ******** 2026-03-19 03:55:57.394887 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.394909 | orchestrator | 2026-03-19 03:55:57.394927 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-19 03:55:57.394945 | orchestrator | Thursday 19 March 2026 03:55:47 +0000 (0:00:00.935) 0:00:08.551 ******** 2026-03-19 03:55:57.394993 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.395014 | orchestrator | 2026-03-19 03:55:57.395033 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-19 03:55:57.395051 | orchestrator | Thursday 19 March 2026 03:55:48 +0000 (0:00:01.443) 0:00:09.994 ******** 2026-03-19 03:55:57.395069 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:55:57.395088 | orchestrator | 2026-03-19 03:55:57.395179 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-19 03:55:57.395197 | orchestrator | Thursday 19 March 2026 03:55:49 +0000 (0:00:00.161) 0:00:10.155 ******** 2026-03-19 03:55:57.395209 | orchestrator | skipping: [testbed-manager] 2026-03-19 03:55:57.395219 | 
orchestrator | 2026-03-19 03:55:57.395230 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-19 03:55:57.395241 | orchestrator | Thursday 19 March 2026 03:55:49 +0000 (0:00:00.189) 0:00:10.345 ******** 2026-03-19 03:55:57.395252 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.395263 | orchestrator | 2026-03-19 03:55:57.395274 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-19 03:55:57.395284 | orchestrator | Thursday 19 March 2026 03:55:51 +0000 (0:00:02.007) 0:00:12.353 ******** 2026-03-19 03:55:57.395295 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-19 03:55:57.395327 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-19 03:55:57.395339 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-19 03:55:57.395351 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-19 03:55:57.395362 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-19 03:55:57.395373 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-19 03:55:57.395384 | orchestrator | 2026-03-19 03:55:57.395395 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-19 03:55:57.395405 | orchestrator | Thursday 19 March 2026 03:55:55 +0000 (0:00:03.795) 0:00:16.148 ******** 2026-03-19 03:55:57.395416 | orchestrator | ok: [testbed-manager] 2026-03-19 03:55:57.395427 | orchestrator | 2026-03-19 03:55:57.395437 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 03:55:57.395448 | orchestrator | testbed-manager : ok=9  
changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-19 03:55:57.395460 | orchestrator | 2026-03-19 03:55:57.395471 | orchestrator | 2026-03-19 03:55:57.395551 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 03:55:57.395564 | orchestrator | Thursday 19 March 2026 03:55:57 +0000 (0:00:01.856) 0:00:18.004 ******** 2026-03-19 03:55:57.395576 | orchestrator | =============================================================================== 2026-03-19 03:55:57.395588 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.80s 2026-03-19 03:55:57.395601 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.01s 2026-03-19 03:55:57.395636 | orchestrator | osism.services.frr : Install frr package -------------------------------- 1.99s 2026-03-19 03:55:57.395649 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.86s 2026-03-19 03:55:57.395661 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.51s 2026-03-19 03:55:57.395674 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.44s 2026-03-19 03:55:57.395686 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-03-19 03:55:57.395715 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-03-19 03:55:57.395727 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.78s 2026-03-19 03:55:57.395738 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-03-19 03:55:57.395749 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-19 03:55:57.712366 | orchestrator | + osism apply kubernetes 2026-03-19 03:55:59.937147 | 
orchestrator | 2026-03-19 03:55:59 | INFO  | Task f15f59bc-8e2a-4713-8c5b-1c45fa223600 (kubernetes) was prepared for execution. 2026-03-19 03:55:59.937234 | orchestrator | 2026-03-19 03:55:59 | INFO  | It takes a moment until task f15f59bc-8e2a-4713-8c5b-1c45fa223600 (kubernetes) has been started and output is visible here. 2026-03-19 03:56:46.185523 | orchestrator | 2026-03-19 03:56:46.185654 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-19 03:56:46.185680 | orchestrator | 2026-03-19 03:56:46.185697 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-19 03:56:46.185716 | orchestrator | Thursday 19 March 2026 03:56:06 +0000 (0:00:01.756) 0:00:01.756 ******** 2026-03-19 03:56:46.185732 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.185747 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.185761 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.185776 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.185791 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.185806 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.185821 | orchestrator | 2026-03-19 03:56:46.185836 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-19 03:56:46.185851 | orchestrator | Thursday 19 March 2026 03:56:11 +0000 (0:00:04.805) 0:00:06.561 ******** 2026-03-19 03:56:46.185865 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.185880 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.185895 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.185910 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.185925 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.185940 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.185955 | orchestrator | 2026-03-19 03:56:46.185969 | orchestrator | TASK [k3s_prereq : Set 
SELinux to disabled state] ****************************** 2026-03-19 03:56:46.185985 | orchestrator | Thursday 19 March 2026 03:56:13 +0000 (0:00:02.385) 0:00:08.946 ******** 2026-03-19 03:56:46.186083 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.186108 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.186124 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.186139 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.186154 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.186168 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.186184 | orchestrator | 2026-03-19 03:56:46.186200 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-19 03:56:46.186217 | orchestrator | Thursday 19 March 2026 03:56:16 +0000 (0:00:02.641) 0:00:11.588 ******** 2026-03-19 03:56:46.186232 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.186248 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.186262 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.186277 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.186292 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.186307 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.186323 | orchestrator | 2026-03-19 03:56:46.186339 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-19 03:56:46.186355 | orchestrator | Thursday 19 March 2026 03:56:19 +0000 (0:00:03.167) 0:00:14.755 ******** 2026-03-19 03:56:46.186370 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.186385 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.186401 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.186415 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.186461 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.186476 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.186491 | orchestrator | 
2026-03-19 03:56:46.186506 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-19 03:56:46.186520 | orchestrator | Thursday 19 March 2026 03:56:22 +0000 (0:00:03.372) 0:00:18.127 ******** 2026-03-19 03:56:46.186534 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.186548 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.186562 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.186576 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.186589 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.186603 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.186617 | orchestrator | 2026-03-19 03:56:46.186631 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-19 03:56:46.186644 | orchestrator | Thursday 19 March 2026 03:56:24 +0000 (0:00:02.105) 0:00:20.233 ******** 2026-03-19 03:56:46.186658 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.186672 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.186686 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.186699 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.186713 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.186727 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.186741 | orchestrator | 2026-03-19 03:56:46.186755 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-19 03:56:46.186770 | orchestrator | Thursday 19 March 2026 03:56:26 +0000 (0:00:01.996) 0:00:22.230 ******** 2026-03-19 03:56:46.186784 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.186798 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.186811 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.186826 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.186856 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
03:56:46.186870 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.186885 | orchestrator | 2026-03-19 03:56:46.186899 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-19 03:56:46.186913 | orchestrator | Thursday 19 March 2026 03:56:28 +0000 (0:00:01.989) 0:00:24.219 ******** 2026-03-19 03:56:46.186928 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 03:56:46.186943 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.186957 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.186973 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 03:56:46.186988 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.187030 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.187046 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 03:56:46.187060 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.187074 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.187089 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 03:56:46.187103 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.187117 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.187161 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-19 03:56:46.187178 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.187193 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.187208 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2026-03-19 03:56:46.187221 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-19 03:56:46.187235 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.187249 | orchestrator | 2026-03-19 03:56:46.187281 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-19 03:56:46.187297 | orchestrator | Thursday 19 March 2026 03:56:30 +0000 (0:00:01.965) 0:00:26.185 ******** 2026-03-19 03:56:46.187312 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.187326 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.187340 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.187355 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.187369 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.187383 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.187397 | orchestrator | 2026-03-19 03:56:46.187412 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-19 03:56:46.187427 | orchestrator | Thursday 19 March 2026 03:56:32 +0000 (0:00:02.118) 0:00:28.304 ******** 2026-03-19 03:56:46.187442 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.187458 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.187472 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.187486 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.187500 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.187514 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.187528 | orchestrator | 2026-03-19 03:56:46.187542 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-19 03:56:46.187555 | orchestrator | Thursday 19 March 2026 03:56:34 +0000 (0:00:01.912) 0:00:30.217 ******** 2026-03-19 03:56:46.187569 | orchestrator | ok: [testbed-node-4] 2026-03-19 03:56:46.187583 | 
orchestrator | ok: [testbed-node-1] 2026-03-19 03:56:46.187598 | orchestrator | ok: [testbed-node-3] 2026-03-19 03:56:46.187612 | orchestrator | ok: [testbed-node-5] 2026-03-19 03:56:46.187626 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:56:46.187641 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:56:46.187655 | orchestrator | 2026-03-19 03:56:46.187670 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-19 03:56:46.187686 | orchestrator | Thursday 19 March 2026 03:56:37 +0000 (0:00:02.839) 0:00:33.057 ******** 2026-03-19 03:56:46.187700 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.187715 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.187726 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.187736 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.187751 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.187765 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.187779 | orchestrator | 2026-03-19 03:56:46.187793 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-19 03:56:46.187808 | orchestrator | Thursday 19 March 2026 03:56:39 +0000 (0:00:01.923) 0:00:34.980 ******** 2026-03-19 03:56:46.187822 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.187836 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.187851 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.187865 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.187881 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.187895 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.187910 | orchestrator | 2026-03-19 03:56:46.187926 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-19 03:56:46.187941 | orchestrator | Thursday 19 
March 2026 03:56:41 +0000 (0:00:02.230) 0:00:37.211 ******** 2026-03-19 03:56:46.187953 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.187967 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.187975 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.187984 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.187993 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.188068 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.188080 | orchestrator | 2026-03-19 03:56:46.188089 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-19 03:56:46.188098 | orchestrator | Thursday 19 March 2026 03:56:43 +0000 (0:00:01.842) 0:00:39.053 ******** 2026-03-19 03:56:46.188119 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-19 03:56:46.188128 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-19 03:56:46.188137 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.188146 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-19 03:56:46.188155 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-19 03:56:46.188163 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.188172 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-19 03:56:46.188181 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-19 03:56:46.188189 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:56:46.188198 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-19 03:56:46.188207 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-19 03:56:46.188215 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:56:46.188224 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-19 03:56:46.188233 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-19 
03:56:46.188242 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:56:46.188250 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-19 03:56:46.188259 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-19 03:56:46.188268 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:56:46.188276 | orchestrator | 2026-03-19 03:56:46.188289 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-19 03:56:46.188304 | orchestrator | Thursday 19 March 2026 03:56:45 +0000 (0:00:02.164) 0:00:41.218 ******** 2026-03-19 03:56:46.188318 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:56:46.188332 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:56:46.188360 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:58:51.533534 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:58:51.533639 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.533650 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.533659 | orchestrator | 2026-03-19 03:58:51.533669 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-19 03:58:51.533679 | orchestrator | Thursday 19 March 2026 03:56:47 +0000 (0:00:01.729) 0:00:42.948 ******** 2026-03-19 03:58:51.533688 | orchestrator | skipping: [testbed-node-3] 2026-03-19 03:58:51.533696 | orchestrator | skipping: [testbed-node-4] 2026-03-19 03:58:51.533704 | orchestrator | skipping: [testbed-node-5] 2026-03-19 03:58:51.533712 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:58:51.533719 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.533727 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.533735 | orchestrator | 2026-03-19 03:58:51.533743 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-19 03:58:51.533751 | orchestrator | 2026-03-19 
03:58:51.533759 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-19 03:58:51.533769 | orchestrator | Thursday 19 March 2026 03:56:50 +0000 (0:00:02.692) 0:00:45.641 ******** 2026-03-19 03:58:51.533778 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.533787 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.533811 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.533819 | orchestrator | 2026-03-19 03:58:51.533831 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-19 03:58:51.533839 | orchestrator | Thursday 19 March 2026 03:56:51 +0000 (0:00:01.799) 0:00:47.441 ******** 2026-03-19 03:58:51.533847 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.533855 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.533864 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.533872 | orchestrator | 2026-03-19 03:58:51.533880 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-19 03:58:51.533888 | orchestrator | Thursday 19 March 2026 03:56:54 +0000 (0:00:02.137) 0:00:49.578 ******** 2026-03-19 03:58:51.533917 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:58:51.533926 | orchestrator | changed: [testbed-node-1] 2026-03-19 03:58:51.533934 | orchestrator | changed: [testbed-node-2] 2026-03-19 03:58:51.533942 | orchestrator | 2026-03-19 03:58:51.533950 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-19 03:58:51.533958 | orchestrator | Thursday 19 March 2026 03:56:56 +0000 (0:00:02.180) 0:00:51.758 ******** 2026-03-19 03:58:51.533966 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.533974 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.533982 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.533989 | orchestrator | 2026-03-19 03:58:51.533997 | orchestrator | TASK [k3s_server : 
Deploy K3s http_proxy conf] ********************************* 2026-03-19 03:58:51.534005 | orchestrator | Thursday 19 March 2026 03:56:58 +0000 (0:00:01.951) 0:00:53.710 ******** 2026-03-19 03:58:51.534013 | orchestrator | skipping: [testbed-node-0] 2026-03-19 03:58:51.534070 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.534079 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.534087 | orchestrator | 2026-03-19 03:58:51.534096 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-19 03:58:51.534104 | orchestrator | Thursday 19 March 2026 03:56:59 +0000 (0:00:01.423) 0:00:55.134 ******** 2026-03-19 03:58:51.534113 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.534122 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.534131 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.534140 | orchestrator | 2026-03-19 03:58:51.534149 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-19 03:58:51.534157 | orchestrator | Thursday 19 March 2026 03:57:01 +0000 (0:00:01.766) 0:00:56.900 ******** 2026-03-19 03:58:51.534166 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.534174 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.534182 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.534190 | orchestrator | 2026-03-19 03:58:51.534199 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-19 03:58:51.534207 | orchestrator | Thursday 19 March 2026 03:57:03 +0000 (0:00:02.210) 0:00:59.111 ******** 2026-03-19 03:58:51.534216 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 03:58:51.534225 | orchestrator | 2026-03-19 03:58:51.534233 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-19 03:58:51.534279 | orchestrator | 
Thursday 19 March 2026 03:57:05 +0000 (0:00:01.966) 0:01:01.078 ******** 2026-03-19 03:58:51.534289 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.534297 | orchestrator | ok: [testbed-node-1] 2026-03-19 03:58:51.534306 | orchestrator | ok: [testbed-node-2] 2026-03-19 03:58:51.534314 | orchestrator | 2026-03-19 03:58:51.534322 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-19 03:58:51.534330 | orchestrator | Thursday 19 March 2026 03:57:08 +0000 (0:00:02.511) 0:01:03.590 ******** 2026-03-19 03:58:51.534338 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.534346 | orchestrator | ok: [testbed-node-0] 2026-03-19 03:58:51.534354 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.534363 | orchestrator | 2026-03-19 03:58:51.534370 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-19 03:58:51.534378 | orchestrator | Thursday 19 March 2026 03:57:09 +0000 (0:00:01.639) 0:01:05.229 ******** 2026-03-19 03:58:51.534386 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.534395 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.534403 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:58:51.534412 | orchestrator | 2026-03-19 03:58:51.534420 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-19 03:58:51.534428 | orchestrator | Thursday 19 March 2026 03:57:11 +0000 (0:00:01.863) 0:01:07.093 ******** 2026-03-19 03:58:51.534436 | orchestrator | skipping: [testbed-node-1] 2026-03-19 03:58:51.534444 | orchestrator | skipping: [testbed-node-2] 2026-03-19 03:58:51.534452 | orchestrator | changed: [testbed-node-0] 2026-03-19 03:58:51.534467 | orchestrator | 2026-03-19 03:58:51.534475 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-19 03:58:51.534483 | orchestrator | Thursday 19 March 2026 
03:57:14 +0000 (0:00:02.504) 0:01:09.598 ********
2026-03-19 03:58:51.534491 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:58:51.534500 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:58:51.534523 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:58:51.534531 | orchestrator |
2026-03-19 03:58:51.534539 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-19 03:58:51.534547 | orchestrator | Thursday 19 March 2026 03:57:15 +0000 (0:00:01.414) 0:01:11.012 ********
2026-03-19 03:58:51.534555 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:58:51.534562 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:58:51.534570 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:58:51.534578 | orchestrator |
2026-03-19 03:58:51.534586 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-19 03:58:51.534594 | orchestrator | Thursday 19 March 2026 03:57:17 +0000 (0:00:01.573) 0:01:12.586 ********
2026-03-19 03:58:51.534602 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:58:51.534610 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:58:51.534618 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:58:51.534626 | orchestrator |
2026-03-19 03:58:51.534634 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-19 03:58:51.534641 | orchestrator | Thursday 19 March 2026 03:57:19 +0000 (0:00:02.158) 0:01:14.745 ********
2026-03-19 03:58:51.534649 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.534656 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.534664 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.534671 | orchestrator |
2026-03-19 03:58:51.534679 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-19 03:58:51.534687 | orchestrator | Thursday 19 March 2026 03:57:21 +0000
(0:00:01.911) 0:01:16.657 ********
2026-03-19 03:58:51.534695 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.534703 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.534711 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.534718 | orchestrator |
2026-03-19 03:58:51.534726 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-19 03:58:51.534734 | orchestrator | Thursday 19 March 2026 03:57:22 +0000 (0:00:01.465) 0:01:18.122 ********
2026-03-19 03:58:51.534742 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 03:58:51.534752 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 03:58:51.534759 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-19 03:58:51.534766 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 03:58:51.534775 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 03:58:51.534783 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-19 03:58:51.534791 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.534798 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.534806 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.534814 | orchestrator |
2026-03-19 03:58:51.534822 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-19 03:58:51.534830 | orchestrator | Thursday 19 March 2026 03:57:46 +0000 (0:00:23.348) 0:01:41.471 ********
2026-03-19 03:58:51.534837 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:58:51.534845 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:58:51.534858 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:58:51.534867 | orchestrator |
2026-03-19 03:58:51.534875 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-19 03:58:51.534883 | orchestrator | Thursday 19 March 2026 03:57:47 +0000 (0:00:01.367) 0:01:42.838 ********
2026-03-19 03:58:51.534891 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:58:51.534899 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:58:51.534906 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:58:51.534914 | orchestrator |
2026-03-19 03:58:51.534922 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-19 03:58:51.534930 | orchestrator | Thursday 19 March 2026 03:57:49 +0000 (0:00:02.103) 0:01:44.941 ********
2026-03-19 03:58:51.534937 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.534945 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.534953 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.534962 | orchestrator |
2026-03-19 03:58:51.534969 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-19 03:58:51.534977 | orchestrator | Thursday 19 March 2026 03:57:51 +0000 (0:00:02.234) 0:01:47.176 ********
2026-03-19 03:58:51.534985 | orchestrator
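The "Verify that all nodes actually joined" task above is a retries/until loop (20 attempts here) around a node-readiness check. A minimal Python sketch of that polling logic, assuming `kubectl get nodes`-style output; the function names and parsing are illustrative, not taken from the role:

```python
import time

def all_nodes_ready(kubectl_output: str, expected: set) -> bool:
    """Return True if every expected node appears with STATUS Ready
    in `kubectl get nodes`-style output."""
    ready = set()
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        # STATUS may be a comma-separated list, e.g. "Ready,SchedulingDisabled"
        if len(fields) >= 2 and "Ready" in fields[1].split(","):
            ready.add(fields[0])
    return expected <= ready

def wait_for_join(get_nodes, expected, retries=20, delay=10):
    """Poll until all expected nodes joined, mirroring Ansible's retries/until."""
    for _ in range(retries):
        if all_nodes_ready(get_nodes(), expected):
            return True
        time.sleep(delay)
    return False
```

In the playbook the same effect is achieved declaratively with `retries`/`until` on the task; when it exhausts its retries, the task name points at `k3s-init.service` logs as the place to look.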
| changed: [testbed-node-1]
2026-03-19 03:58:51.534993 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:58:51.535001 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:58:51.535009 | orchestrator |
2026-03-19 03:58:51.535016 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-19 03:58:51.535024 | orchestrator | Thursday 19 March 2026 03:58:46 +0000 (0:00:54.462) 0:02:41.639 ********
2026-03-19 03:58:51.535032 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.535040 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.535048 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.535056 | orchestrator |
2026-03-19 03:58:51.535069 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-19 03:58:51.535077 | orchestrator | Thursday 19 March 2026 03:58:47 +0000 (0:00:01.768) 0:02:43.407 ********
2026-03-19 03:58:51.535085 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:58:51.535093 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:58:51.535101 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:58:51.535109 | orchestrator |
2026-03-19 03:58:51.535116 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-19 03:58:51.535124 | orchestrator | Thursday 19 March 2026 03:58:49 +0000 (0:00:01.679) 0:02:45.087 ********
2026-03-19 03:58:51.535133 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:58:51.535140 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:58:51.535148 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:58:51.535156 | orchestrator |
2026-03-19 03:58:51.535169 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-19 03:59:40.632569 | orchestrator | Thursday 19 March 2026 03:58:51 +0000 (0:00:01.891) 0:02:46.978 ********
2026-03-19 03:59:40.632711 | orchestrator | ok: [testbed-node-1]
2026-03-19
03:59:40.632733 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:59:40.632751 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:59:40.632768 | orchestrator |
2026-03-19 03:59:40.632787 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-19 03:59:40.632804 | orchestrator | Thursday 19 March 2026 03:58:53 +0000 (0:00:01.662) 0:02:48.642 ********
2026-03-19 03:59:40.632822 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:59:40.632841 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:59:40.632858 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:59:40.632895 | orchestrator |
2026-03-19 03:59:40.632913 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-19 03:59:40.632930 | orchestrator | Thursday 19 March 2026 03:58:54 +0000 (0:00:01.337) 0:02:49.979 ********
2026-03-19 03:59:40.632947 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:59:40.632967 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:59:40.632986 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:59:40.633004 | orchestrator |
2026-03-19 03:59:40.633021 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-19 03:59:40.633070 | orchestrator | Thursday 19 March 2026 03:58:56 +0000 (0:00:01.768) 0:02:51.748 ********
2026-03-19 03:59:40.633105 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:59:40.633122 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:59:40.633139 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:59:40.633156 | orchestrator |
2026-03-19 03:59:40.633172 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-19 03:59:40.633189 | orchestrator | Thursday 19 March 2026 03:58:58 +0000 (0:00:01.997) 0:02:53.746 ********
2026-03-19 03:59:40.633205 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:59:40.633223 | orchestrator | changed:
[testbed-node-1]
2026-03-19 03:59:40.633239 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:59:40.633256 | orchestrator |
2026-03-19 03:59:40.633274 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-19 03:59:40.633292 | orchestrator | Thursday 19 March 2026 03:59:00 +0000 (0:00:01.794) 0:02:55.540 ********
2026-03-19 03:59:40.633308 | orchestrator | changed: [testbed-node-0]
2026-03-19 03:59:40.633324 | orchestrator | changed: [testbed-node-1]
2026-03-19 03:59:40.633369 | orchestrator | changed: [testbed-node-2]
2026-03-19 03:59:40.633387 | orchestrator |
2026-03-19 03:59:40.633405 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-19 03:59:40.633423 | orchestrator | Thursday 19 March 2026 03:59:02 +0000 (0:00:01.931) 0:02:57.472 ********
2026-03-19 03:59:40.633441 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:59:40.633459 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:59:40.633477 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:59:40.633496 | orchestrator |
2026-03-19 03:59:40.633516 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-19 03:59:40.633534 | orchestrator | Thursday 19 March 2026 03:59:03 +0000 (0:00:01.381) 0:02:58.853 ********
2026-03-19 03:59:40.633551 | orchestrator | skipping: [testbed-node-0]
2026-03-19 03:59:40.633569 | orchestrator | skipping: [testbed-node-1]
2026-03-19 03:59:40.633587 | orchestrator | skipping: [testbed-node-2]
2026-03-19 03:59:40.633604 | orchestrator |
2026-03-19 03:59:40.633621 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-19 03:59:40.633638 | orchestrator | Thursday 19 March 2026 03:59:04 +0000 (0:00:01.392) 0:03:00.245 ********
2026-03-19 03:59:40.633656 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:59:40.633675 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:59:40.633694 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:59:40.633713 | orchestrator |
2026-03-19 03:59:40.633731 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-19 03:59:40.633750 | orchestrator | Thursday 19 March 2026 03:59:06 +0000 (0:00:01.798) 0:03:02.044 ********
2026-03-19 03:59:40.633769 | orchestrator | ok: [testbed-node-0]
2026-03-19 03:59:40.633788 | orchestrator | ok: [testbed-node-1]
2026-03-19 03:59:40.633807 | orchestrator | ok: [testbed-node-2]
2026-03-19 03:59:40.633821 | orchestrator |
2026-03-19 03:59:40.633833 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-19 03:59:40.633845 | orchestrator | Thursday 19 March 2026 03:59:08 +0000 (0:00:01.624) 0:03:03.669 ********
2026-03-19 03:59:40.633856 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 03:59:40.633868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 03:59:40.633879 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 03:59:40.633889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 03:59:40.633900 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-19 03:59:40.633911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 03:59:40.633937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-19 03:59:40.633948 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 03:59:40.633959 |
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-19 03:59:40.633969 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-19 03:59:40.633981 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 03:59:40.633992 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 03:59:40.634109 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 03:59:40.634135 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-19 03:59:40.634153 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 03:59:40.634171 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 03:59:40.634188 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-19 03:59:40.634205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-19 03:59:40.634223 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 03:59:40.634240 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-19 03:59:40.634258 | orchestrator |
2026-03-19 03:59:40.634277 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-19 03:59:40.634296 | orchestrator |
2026-03-19 03:59:40.634314 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-19 03:59:40.634373 | orchestrator | Thursday 19 March 2026 03:59:12 +0000 (0:00:04.479) 0:03:08.149 ********
2026-03-19 03:59:40.634393 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.634413 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.634425 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.634436 | orchestrator |
2026-03-19 03:59:40.634447 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-19 03:59:40.634458 | orchestrator | Thursday 19 March 2026 03:59:14 +0000 (0:00:01.398) 0:03:09.547 ********
2026-03-19 03:59:40.634469 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.634480 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.634490 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.634501 | orchestrator |
2026-03-19 03:59:40.634512 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-19 03:59:40.634523 | orchestrator | Thursday 19 March 2026 03:59:15 +0000 (0:00:01.737) 0:03:11.285 ********
2026-03-19 03:59:40.634534 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.634544 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.634555 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.634566 | orchestrator |
2026-03-19 03:59:40.634576 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-19 03:59:40.634587 | orchestrator | Thursday 19 March 2026 03:59:17 +0000 (0:00:01.690) 0:03:12.975 ********
2026-03-19 03:59:40.634598 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 03:59:40.634609 | orchestrator |
2026-03-19 03:59:40.634620 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-19 03:59:40.634631 | orchestrator | Thursday 19 March 2026 03:59:19 +0000 (0:00:01.735) 0:03:14.711 ********
2026-03-19 03:59:40.634642 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:59:40.634653 | orchestrator |
skipping: [testbed-node-4]
2026-03-19 03:59:40.634664 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:59:40.634686 | orchestrator |
2026-03-19 03:59:40.634697 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-19 03:59:40.634708 | orchestrator | Thursday 19 March 2026 03:59:20 +0000 (0:00:01.382) 0:03:16.094 ********
2026-03-19 03:59:40.634719 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:59:40.634730 | orchestrator | skipping: [testbed-node-4]
2026-03-19 03:59:40.634740 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:59:40.634751 | orchestrator |
2026-03-19 03:59:40.634762 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-19 03:59:40.634773 | orchestrator | Thursday 19 March 2026 03:59:22 +0000 (0:00:01.425) 0:03:17.519 ********
2026-03-19 03:59:40.634783 | orchestrator | skipping: [testbed-node-3]
2026-03-19 03:59:40.634794 | orchestrator | skipping: [testbed-node-4]
2026-03-19 03:59:40.634805 | orchestrator | skipping: [testbed-node-5]
2026-03-19 03:59:40.634816 | orchestrator |
2026-03-19 03:59:40.634827 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-19 03:59:40.634837 | orchestrator | Thursday 19 March 2026 03:59:23 +0000 (0:00:01.349) 0:03:18.869 ********
2026-03-19 03:59:40.634848 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.634859 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.634870 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.634881 | orchestrator |
2026-03-19 03:59:40.634892 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-19 03:59:40.634914 | orchestrator | Thursday 19 March 2026 03:59:25 +0000 (0:00:01.760) 0:03:20.629 ********
2026-03-19 03:59:40.634926 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.634937 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.634947 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.634958 | orchestrator |
2026-03-19 03:59:40.634969 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-19 03:59:40.634980 | orchestrator | Thursday 19 March 2026 03:59:27 +0000 (0:00:02.495) 0:03:23.125 ********
2026-03-19 03:59:40.634991 | orchestrator | ok: [testbed-node-3]
2026-03-19 03:59:40.635002 | orchestrator | ok: [testbed-node-4]
2026-03-19 03:59:40.635012 | orchestrator | ok: [testbed-node-5]
2026-03-19 03:59:40.635023 | orchestrator |
2026-03-19 03:59:40.635034 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-19 03:59:40.635045 | orchestrator | Thursday 19 March 2026 03:59:30 +0000 (0:00:02.471) 0:03:25.596 ********
2026-03-19 03:59:40.635056 | orchestrator | changed: [testbed-node-3]
2026-03-19 03:59:40.635067 | orchestrator | changed: [testbed-node-5]
2026-03-19 03:59:40.635078 | orchestrator | changed: [testbed-node-4]
2026-03-19 03:59:40.635088 | orchestrator |
2026-03-19 03:59:40.635099 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-19 03:59:40.635110 | orchestrator |
2026-03-19 03:59:40.635121 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-19 03:59:40.635131 | orchestrator | Thursday 19 March 2026 03:59:38 +0000 (0:00:08.314) 0:03:33.911 ********
2026-03-19 03:59:40.635142 | orchestrator | ok: [testbed-manager]
2026-03-19 03:59:40.635153 | orchestrator |
2026-03-19 03:59:40.635164 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-19 03:59:40.635185 | orchestrator | Thursday 19 March 2026 03:59:40 +0000 (0:00:02.165) 0:03:36.077 ********
2026-03-19 04:00:50.887570 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.887675 | orchestrator |
2026-03-19 04:00:50.887688 |
orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-19 04:00:50.887698 | orchestrator | Thursday 19 March 2026 03:59:42 +0000 (0:00:01.477) 0:03:37.555 ********
2026-03-19 04:00:50.887707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-19 04:00:50.887716 | orchestrator |
2026-03-19 04:00:50.887724 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-19 04:00:50.887732 | orchestrator | Thursday 19 March 2026 03:59:43 +0000 (0:00:01.632) 0:03:39.188 ********
2026-03-19 04:00:50.887741 | orchestrator | changed: [testbed-manager]
2026-03-19 04:00:50.887773 | orchestrator |
2026-03-19 04:00:50.887786 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-19 04:00:50.887800 | orchestrator | Thursday 19 March 2026 03:59:45 +0000 (0:00:01.970) 0:03:41.158 ********
2026-03-19 04:00:50.887814 | orchestrator | changed: [testbed-manager]
2026-03-19 04:00:50.887828 | orchestrator |
2026-03-19 04:00:50.887841 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-19 04:00:50.887869 | orchestrator | Thursday 19 March 2026 03:59:47 +0000 (0:00:01.719) 0:03:42.878 ********
2026-03-19 04:00:50.887884 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-19 04:00:50.887898 | orchestrator |
2026-03-19 04:00:50.887912 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-19 04:00:50.887924 | orchestrator | Thursday 19 March 2026 03:59:50 +0000 (0:00:02.939) 0:03:45.818 ********
2026-03-19 04:00:50.887932 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-19 04:00:50.887940 | orchestrator |
2026-03-19 04:00:50.887952 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-19 04:00:50.887966 | orchestrator | Thursday 19 March
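The two "Change server address in the kubeconfig" tasks above rewrite the `server:` entry of the kubeconfig fetched from testbed-node-0 so that clients on the manager talk to the cluster endpoint instead of the address k3s wrote locally. A minimal Python sketch of that substitution; the function name is illustrative and the sample endpoint below matches the `Configure kubectl cluster to https://192.168.16.8:6443` task earlier in this run:

```python
import re

def point_kubeconfig_at(kubeconfig: str, server_url: str) -> str:
    """Replace the value of every `server:` line in a kubeconfig document
    with the given endpoint, preserving indentation."""
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + server_url, kubeconfig)
```

The same edit is typically done with `kubectl config set-cluster ... --server=...` or a `replace`/`lineinfile` task; the sketch only shows the string transformation itself.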
2026 03:59:52 +0000 (0:00:01.860) 0:03:47.678 ********
2026-03-19 04:00:50.887979 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.887992 | orchestrator |
2026-03-19 04:00:50.888006 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-19 04:00:50.888020 | orchestrator | Thursday 19 March 2026 03:59:53 +0000 (0:00:01.419) 0:03:49.098 ********
2026-03-19 04:00:50.888035 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888043 | orchestrator |
2026-03-19 04:00:50.888051 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-19 04:00:50.888060 | orchestrator |
2026-03-19 04:00:50.888074 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-19 04:00:50.888087 | orchestrator | Thursday 19 March 2026 03:59:55 +0000 (0:00:01.583) 0:03:50.682 ********
2026-03-19 04:00:50.888101 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888115 | orchestrator |
2026-03-19 04:00:50.888129 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-19 04:00:50.888143 | orchestrator | Thursday 19 March 2026 03:59:56 +0000 (0:00:01.192) 0:03:51.874 ********
2026-03-19 04:00:50.888158 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-19 04:00:50.888169 | orchestrator |
2026-03-19 04:00:50.888183 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-19 04:00:50.888196 | orchestrator | Thursday 19 March 2026 03:59:57 +0000 (0:00:01.464) 0:03:53.338 ********
2026-03-19 04:00:50.888210 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888224 | orchestrator |
2026-03-19 04:00:50.888237 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-19 04:00:50.888250 | orchestrator | Thursday 19 March 2026
03:59:59 +0000 (0:00:01.836) 0:03:55.175 ********
2026-03-19 04:00:50.888263 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888276 | orchestrator |
2026-03-19 04:00:50.888287 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-19 04:00:50.888295 | orchestrator | Thursday 19 March 2026 04:00:02 +0000 (0:00:02.811) 0:03:57.986 ********
2026-03-19 04:00:50.888303 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888311 | orchestrator |
2026-03-19 04:00:50.888319 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-19 04:00:50.888326 | orchestrator | Thursday 19 March 2026 04:00:04 +0000 (0:00:01.498) 0:03:59.485 ********
2026-03-19 04:00:50.888334 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888342 | orchestrator |
2026-03-19 04:00:50.888350 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-19 04:00:50.888358 | orchestrator | Thursday 19 March 2026 04:00:05 +0000 (0:00:01.481) 0:04:00.967 ********
2026-03-19 04:00:50.888366 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888374 | orchestrator |
2026-03-19 04:00:50.888382 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-19 04:00:50.888400 | orchestrator | Thursday 19 March 2026 04:00:07 +0000 (0:00:01.718) 0:04:02.685 ********
2026-03-19 04:00:50.888408 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888416 | orchestrator |
2026-03-19 04:00:50.888423 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-19 04:00:50.888432 | orchestrator | Thursday 19 March 2026 04:00:09 +0000 (0:00:02.665) 0:04:05.350 ********
2026-03-19 04:00:50.888440 | orchestrator | ok: [testbed-manager]
2026-03-19 04:00:50.888448 | orchestrator |
2026-03-19 04:00:50.888455 | orchestrator | PLAY [Run post actions on master
nodes] ****************************************
2026-03-19 04:00:50.888496 | orchestrator |
2026-03-19 04:00:50.888504 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-19 04:00:50.888512 | orchestrator | Thursday 19 March 2026 04:00:11 +0000 (0:00:01.703) 0:04:07.054 ********
2026-03-19 04:00:50.888520 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:00:50.888528 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:00:50.888536 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:00:50.888544 | orchestrator |
2026-03-19 04:00:50.888552 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-19 04:00:50.888560 | orchestrator | Thursday 19 March 2026 04:00:13 +0000 (0:00:01.479) 0:04:08.534 ********
2026-03-19 04:00:50.888567 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:00:50.888575 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:00:50.888583 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:00:50.888591 | orchestrator |
2026-03-19 04:00:50.888615 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-19 04:00:50.888624 | orchestrator | Thursday 19 March 2026 04:00:14 +0000 (0:00:01.557) 0:04:10.092 ********
2026-03-19 04:00:50.888632 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:00:50.888640 | orchestrator |
2026-03-19 04:00:50.888648 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-19 04:00:50.888656 | orchestrator | Thursday 19 March 2026 04:00:16 +0000 (0:00:01.742) 0:04:11.834 ********
2026-03-19 04:00:50.888664 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-19 04:00:50.888672 | orchestrator |
2026-03-19 04:00:50.888680 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP]
*********************
2026-03-19 04:00:50.888688 | orchestrator | Thursday 19 March 2026 04:00:18 +0000 (0:00:01.889) 0:04:13.724 ********
2026-03-19 04:00:50.888696 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 04:00:50.888708 | orchestrator |
2026-03-19 04:00:50.888722 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-19 04:00:50.888734 | orchestrator | Thursday 19 March 2026 04:00:20 +0000 (0:00:01.861) 0:04:15.585 ********
2026-03-19 04:00:50.888747 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:00:50.888760 | orchestrator |
2026-03-19 04:00:50.888771 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-19 04:00:50.888783 | orchestrator | Thursday 19 March 2026 04:00:21 +0000 (0:00:01.202) 0:04:16.787 ********
2026-03-19 04:00:50.888795 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 04:00:50.888808 | orchestrator |
2026-03-19 04:00:50.888821 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-19 04:00:50.888834 | orchestrator | Thursday 19 March 2026 04:00:23 +0000 (0:00:01.997) 0:04:18.785 ********
2026-03-19 04:00:50.888848 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 04:00:50.888861 | orchestrator |
2026-03-19 04:00:50.888875 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-19 04:00:50.888889 | orchestrator | Thursday 19 March 2026 04:00:25 +0000 (0:00:02.263) 0:04:21.048 ********
2026-03-19 04:00:50.888903 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-19 04:00:50.888916 | orchestrator |
2026-03-19 04:00:50.888928 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-19 04:00:50.888936 | orchestrator | Thursday 19 March 2026 04:00:26 +0000 (0:00:01.147) 0:04:22.196 ********
2026-03-19 04:00:50.888952 | orchestrator | ok:
[testbed-node-0 -> localhost]
2026-03-19 04:00:50.888960 | orchestrator |
2026-03-19 04:00:50.888968 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-19 04:00:50.888977 | orchestrator | Thursday 19 March 2026 04:00:27 +0000 (0:00:01.208) 0:04:23.404 ********
2026-03-19 04:00:50.888985 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-19 04:00:50.888993 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-19 04:00:50.889002 | orchestrator | }
2026-03-19 04:00:50.889010 | orchestrator |
2026-03-19 04:00:50.889018 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-19 04:00:50.889026 | orchestrator | Thursday 19 March 2026 04:00:29 +0000 (0:00:01.216) 0:04:24.621 ********
2026-03-19 04:00:50.889034 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:00:50.889042 | orchestrator |
2026-03-19 04:00:50.889049 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-19 04:00:50.889057 | orchestrator | Thursday 19 March 2026 04:00:30 +0000 (0:00:01.214) 0:04:25.836 ********
2026-03-19 04:00:50.889065 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-19 04:00:50.889073 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-19 04:00:50.889080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-19 04:00:50.889088 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-19 04:00:50.889096 | orchestrator |
2026-03-19 04:00:50.889104 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-19 04:00:50.889112 | orchestrator | Thursday 19 March 2026 04:00:36 +0000 (0:00:05.743) 0:04:31.579 ********
2026-03-19 04:00:50.889119 | orchestrator
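The "Determine if Cilium needs update" task above compares the parsed installed version against the target, tolerating the leading `v` (the Log result line shows `1.18.2` vs `v1.18.2` evaluating to `Update needed: False`, so the Install Cilium task is skipped). A minimal Python sketch of that comparison; the function name is illustrative:

```python
def update_needed(installed: str, target: str) -> bool:
    """Compare two semantic versions, ignoring a leading 'v' on either side."""
    def norm(version: str):
        return tuple(int(part) for part in version.lstrip("v").split("."))
    return norm(installed) != norm(target)
```

Normalizing before comparing is what lets `1.18.2` and `v1.18.2` be treated as equal rather than triggering a spurious reinstall.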
| ok: [testbed-node-0 -> localhost] 2026-03-19 04:00:50.889127 | orchestrator | 2026-03-19 04:00:50.889135 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-19 04:00:50.889143 | orchestrator | Thursday 19 March 2026 04:00:38 +0000 (0:00:02.450) 0:04:34.030 ******** 2026-03-19 04:00:50.889151 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 04:00:50.889159 | orchestrator | 2026-03-19 04:00:50.889166 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-19 04:00:50.889174 | orchestrator | Thursday 19 March 2026 04:00:41 +0000 (0:00:02.597) 0:04:36.627 ******** 2026-03-19 04:00:50.889182 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-19 04:00:50.889190 | orchestrator | 2026-03-19 04:00:50.889198 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-19 04:00:50.889215 | orchestrator | Thursday 19 March 2026 04:00:45 +0000 (0:00:04.287) 0:04:40.915 ******** 2026-03-19 04:00:50.889223 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:00:50.889231 | orchestrator | 2026-03-19 04:00:50.889239 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-19 04:00:50.889247 | orchestrator | Thursday 19 March 2026 04:00:46 +0000 (0:00:01.122) 0:04:42.037 ******** 2026-03-19 04:00:50.889254 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-19 04:00:50.889262 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-19 04:00:50.889270 | orchestrator | 2026-03-19 04:00:50.889278 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-19 04:00:50.889286 | orchestrator | Thursday 19 March 2026 04:00:49 +0000 (0:00:02.867) 0:04:44.905 ******** 2026-03-19 
04:00:50.889293 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:00:50.889309 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:01:17.323170 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:01:17.323288 | orchestrator | 2026-03-19 04:01:17.323309 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-19 04:01:17.323325 | orchestrator | Thursday 19 March 2026 04:00:50 +0000 (0:00:01.429) 0:04:46.335 ******** 2026-03-19 04:01:17.323360 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:01:17.323376 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:01:17.323389 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:01:17.323403 | orchestrator | 2026-03-19 04:01:17.323417 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-19 04:01:17.323429 | orchestrator | 2026-03-19 04:01:17.323442 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-19 04:01:17.323456 | orchestrator | Thursday 19 March 2026 04:00:53 +0000 (0:00:02.318) 0:04:48.653 ******** 2026-03-19 04:01:17.323468 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:17.323480 | orchestrator | 2026-03-19 04:01:17.323492 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-19 04:01:17.323503 | orchestrator | Thursday 19 March 2026 04:00:54 +0000 (0:00:01.166) 0:04:49.820 ******** 2026-03-19 04:01:17.323560 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-19 04:01:17.323575 | orchestrator | 2026-03-19 04:01:17.323588 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-19 04:01:17.323600 | orchestrator | Thursday 19 March 2026 04:00:55 +0000 (0:00:01.464) 0:04:51.285 ******** 2026-03-19 04:01:17.323612 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:17.323624 | 
orchestrator | 2026-03-19 04:01:17.323637 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-19 04:01:17.323650 | orchestrator | 2026-03-19 04:01:17.323662 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-19 04:01:17.323674 | orchestrator | Thursday 19 March 2026 04:01:01 +0000 (0:00:05.294) 0:04:56.579 ******** 2026-03-19 04:01:17.323687 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:01:17.323699 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:01:17.323712 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:01:17.323725 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:01:17.323737 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:01:17.323750 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:01:17.323763 | orchestrator | 2026-03-19 04:01:17.323776 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-19 04:01:17.323787 | orchestrator | Thursday 19 March 2026 04:01:03 +0000 (0:00:01.986) 0:04:58.566 ******** 2026-03-19 04:01:17.323800 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-19 04:01:17.323812 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-19 04:01:17.323825 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-19 04:01:17.323837 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 04:01:17.323850 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 04:01:17.323863 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-19 04:01:17.323876 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-03-19 04:01:17.323889 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-19 04:01:17.323902 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-19 04:01:17.323915 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 04:01:17.323929 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 04:01:17.323941 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 04:01:17.323954 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-19 04:01:17.323967 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 04:01:17.323979 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 04:01:17.324005 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-19 04:01:17.324018 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 04:01:17.324031 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-19 04:01:17.324043 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-19 04:01:17.324056 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-19 04:01:17.324069 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-19 04:01:17.324081 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 04:01:17.324094 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 
04:01:17.324106 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-19 04:01:17.324118 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 04:01:17.324130 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 04:01:17.324164 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-19 04:01:17.324177 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 04:01:17.324190 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 04:01:17.324202 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-19 04:01:17.324214 | orchestrator | 2026-03-19 04:01:17.324225 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-19 04:01:17.324238 | orchestrator | Thursday 19 March 2026 04:01:12 +0000 (0:00:09.681) 0:05:08.248 ******** 2026-03-19 04:01:17.324250 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:01:17.324263 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:01:17.324276 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:01:17.324287 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:01:17.324299 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:01:17.324312 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:01:17.324324 | orchestrator | 2026-03-19 04:01:17.324336 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-19 04:01:17.324355 | orchestrator | Thursday 19 March 2026 04:01:14 +0000 (0:00:01.977) 0:05:10.225 ******** 2026-03-19 04:01:17.324368 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:01:17.324380 | orchestrator | skipping: [testbed-node-4] 
2026-03-19 04:01:17.324392 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:01:17.324403 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:01:17.324415 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:01:17.324427 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:01:17.324440 | orchestrator | 2026-03-19 04:01:17.324453 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:01:17.324465 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 04:01:17.324481 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 04:01:17.324492 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 04:01:17.324504 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-19 04:01:17.324564 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 04:01:17.324587 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 04:01:17.324599 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-19 04:01:17.324612 | orchestrator | 2026-03-19 04:01:17.324624 | orchestrator | 2026-03-19 04:01:17.324636 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:01:17.324648 | orchestrator | Thursday 19 March 2026 04:01:17 +0000 (0:00:02.533) 0:05:12.759 ******** 2026-03-19 04:01:17.324660 | orchestrator | =============================================================================== 2026-03-19 04:01:17.324673 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 54.46s 2026-03-19 04:01:17.324685 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.35s 2026-03-19 04:01:17.324699 | orchestrator | Manage labels ----------------------------------------------------------- 9.68s 2026-03-19 04:01:17.324711 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.31s 2026-03-19 04:01:17.324723 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.74s 2026-03-19 04:01:17.324735 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.29s 2026-03-19 04:01:17.324747 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.81s 2026-03-19 04:01:17.324759 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.48s 2026-03-19 04:01:17.324771 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.29s 2026-03-19 04:01:17.324783 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 3.37s 2026-03-19 04:01:17.324795 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.17s 2026-03-19 04:01:17.324807 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.94s 2026-03-19 04:01:17.324819 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.87s 2026-03-19 04:01:17.324830 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.84s 2026-03-19 04:01:17.324841 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.81s 2026-03-19 04:01:17.324853 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.69s 2026-03-19 04:01:17.324865 | orchestrator | kubectl : Install 
required packages ------------------------------------- 2.66s 2026-03-19 04:01:17.324877 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 2.64s 2026-03-19 04:01:17.324903 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.60s 2026-03-19 04:01:17.797397 | orchestrator | Manage taints ----------------------------------------------------------- 2.53s 2026-03-19 04:01:18.134094 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-19 04:01:18.134198 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-03-19 04:01:18.141771 | orchestrator | + set -e 2026-03-19 04:01:18.142010 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-19 04:01:18.142090 | orchestrator | ++ export INTERACTIVE=false 2026-03-19 04:01:18.142112 | orchestrator | ++ INTERACTIVE=false 2026-03-19 04:01:18.142131 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-19 04:01:18.142149 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-19 04:01:18.142168 | orchestrator | + osism apply openstackclient 2026-03-19 04:01:30.191391 | orchestrator | 2026-03-19 04:01:30 | INFO  | Task cef853ee-a2ad-4c6e-aebd-6036c8e16061 (openstackclient) was prepared for execution. 2026-03-19 04:01:30.191508 | orchestrator | 2026-03-19 04:01:30 | INFO  | It takes a moment until task cef853ee-a2ad-4c6e-aebd-6036c8e16061 (openstackclient) has been started and output is visible here. 
2026-03-19 04:01:56.850395 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-19 04:01:56.850537 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-19 04:01:56.850568 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-19 04:01:56.850653 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-19 04:01:56.850685 | orchestrator | 2026-03-19 04:01:56.850697 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-19 04:01:56.850708 | orchestrator | 2026-03-19 04:01:56.850719 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-19 04:01:56.850730 | orchestrator | Thursday 19 March 2026 04:01:36 +0000 (0:00:01.525) 0:00:01.525 ******** 2026-03-19 04:01:56.850743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-19 04:01:56.850762 | orchestrator | 2026-03-19 04:01:56.850788 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-19 04:01:56.850810 | orchestrator | Thursday 19 March 2026 04:01:37 +0000 (0:00:00.863) 0:00:02.389 ******** 2026-03-19 04:01:56.850828 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-19 04:01:56.850846 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-19 04:01:56.850863 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-19 04:01:56.850880 | orchestrator | 2026-03-19 04:01:56.850898 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-19 04:01:56.850917 | orchestrator | Thursday 19 March 2026 04:01:38 +0000 (0:00:01.378) 0:00:03.767 ******** 2026-03-19 04:01:56.850937 | 
orchestrator | changed: [testbed-manager] 2026-03-19 04:01:56.850957 | orchestrator | 2026-03-19 04:01:56.850975 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-19 04:01:56.850993 | orchestrator | Thursday 19 March 2026 04:01:39 +0000 (0:00:01.344) 0:00:05.112 ******** 2026-03-19 04:01:56.851007 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:56.851021 | orchestrator | 2026-03-19 04:01:56.851033 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-19 04:01:56.851047 | orchestrator | Thursday 19 March 2026 04:01:41 +0000 (0:00:01.195) 0:00:06.307 ******** 2026-03-19 04:01:56.851061 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:56.851074 | orchestrator | 2026-03-19 04:01:56.851087 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-19 04:01:56.851099 | orchestrator | Thursday 19 March 2026 04:01:42 +0000 (0:00:00.941) 0:00:07.249 ******** 2026-03-19 04:01:56.851112 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-19 04:01:56.851125 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-19 04:01:56.851151 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:56.851163 | orchestrator | 2026-03-19 04:01:56.851176 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-19 04:01:56.851189 | orchestrator | Thursday 19 March 2026 04:01:42 +0000 (0:00:00.693) 0:00:07.942 ******** 2026-03-19 04:01:56.851202 | orchestrator | changed: [testbed-manager] 2026-03-19 04:01:56.851298 | orchestrator | 2026-03-19 04:01:56.851311 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-19 04:01:56.851322 | orchestrator | Thursday 19 March 2026 04:01:53 +0000 (0:00:10.690) 0:00:18.632 ******** 2026-03-19 04:01:56.851333 
| orchestrator | changed: [testbed-manager] 2026-03-19 04:01:56.851344 | orchestrator | 2026-03-19 04:01:56.851381 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-19 04:01:56.851392 | orchestrator | Thursday 19 March 2026 04:01:54 +0000 (0:00:01.305) 0:00:19.938 ******** 2026-03-19 04:01:56.851403 | orchestrator | changed: [testbed-manager] 2026-03-19 04:01:56.851414 | orchestrator | 2026-03-19 04:01:56.851425 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-19 04:01:56.851435 | orchestrator | Thursday 19 March 2026 04:01:55 +0000 (0:00:00.629) 0:00:20.568 ******** 2026-03-19 04:01:56.851446 | orchestrator | ok: [testbed-manager] 2026-03-19 04:01:56.851457 | orchestrator | 2026-03-19 04:01:56.851468 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:01:56.851479 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-19 04:01:56.851491 | orchestrator | 2026-03-19 04:01:56.851502 | orchestrator | 2026-03-19 04:01:56.851512 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:01:56.851523 | orchestrator | Thursday 19 March 2026 04:01:56 +0000 (0:00:01.138) 0:00:21.706 ******** 2026-03-19 04:01:56.851534 | orchestrator | =============================================================================== 2026-03-19 04:01:56.851545 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.69s 2026-03-19 04:01:56.851556 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.38s 2026-03-19 04:01:56.851567 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.34s 2026-03-19 04:01:56.851612 | orchestrator | osism.services.openstackclient : Ensure that all containers are up 
------ 1.31s 2026-03-19 04:01:56.851630 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.20s 2026-03-19 04:01:56.851646 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.14s 2026-03-19 04:01:56.851679 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.94s 2026-03-19 04:01:56.851699 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.86s 2026-03-19 04:01:56.851711 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.69s 2026-03-19 04:01:56.851722 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.63s 2026-03-19 04:01:57.209098 | orchestrator | + osism apply -a upgrade common 2026-03-19 04:01:59.369631 | orchestrator | 2026-03-19 04:01:59 | INFO  | Task 13cce5b2-21c2-4e4a-a0fd-fc2f708144b4 (common) was prepared for execution. 2026-03-19 04:01:59.369735 | orchestrator | 2026-03-19 04:01:59 | INFO  | It takes a moment until task 13cce5b2-21c2-4e4a-a0fd-fc2f708144b4 (common) has been started and output is visible here. 
2026-03-19 04:02:19.732420 | orchestrator | 2026-03-19 04:02:19.732518 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-19 04:02:19.732527 | orchestrator | 2026-03-19 04:02:19.732534 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-19 04:02:19.732540 | orchestrator | Thursday 19 March 2026 04:02:06 +0000 (0:00:02.206) 0:00:02.206 ******** 2026-03-19 04:02:19.732545 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:02:19.732552 | orchestrator | 2026-03-19 04:02:19.732558 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-19 04:02:19.732563 | orchestrator | Thursday 19 March 2026 04:02:09 +0000 (0:00:03.864) 0:00:06.071 ******** 2026-03-19 04:02:19.732569 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732575 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732580 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732585 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732652 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732661 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732666 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732671 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732676 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732681 | 
orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732687 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732692 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-19 04:02:19.732697 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732702 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732708 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732713 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732718 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732723 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-19 04:02:19.732728 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732734 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732739 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-19 04:02:19.732744 | orchestrator | 2026-03-19 04:02:19.732749 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-19 04:02:19.732754 | orchestrator | Thursday 19 March 2026 04:02:13 +0000 (0:00:03.903) 0:00:09.974 ******** 2026-03-19 04:02:19.732759 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:02:19.732766 | orchestrator | 2026-03-19 
04:02:19.732772 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-19 04:02:19.732777 | orchestrator | Thursday 19 March 2026 04:02:16 +0000 (0:00:03.128) 0:00:13.103 ******** 2026-03-19 04:02:19.732786 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732824 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732835 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732840 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732846 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:19.732974 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732990 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:19.732999 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:19.733023 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630209 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630238 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630257 | orchestrator | ok: [testbed-node-4] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630273 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630288 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630304 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630371 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630394 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630412 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630426 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:22.630441 | orchestrator | 2026-03-19 04:02:22.630456 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-19 04:02:22.630471 | orchestrator | Thursday 19 March 2026 04:02:21 +0000 (0:00:04.818) 0:00:17.921 ******** 2026-03-19 04:02:22.630489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:22.630505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:22.630520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:22.630544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:22.630574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.185929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:25.186079 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:02:25.186088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186097 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:02:25.186148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:25.186163 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:02:25.186188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:25.186230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:25.186247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186254 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:02:25.186261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186270 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:02:25.186277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:25.186284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:25.186313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.447861 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:02:28.447950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.447963 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:02:28.447970 | orchestrator | 2026-03-19 04:02:28.447977 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-19 04:02:28.447985 | orchestrator | Thursday 19 March 2026 04:02:25 +0000 (0:00:03.426) 0:00:21.347 ******** 2026-03-19 04:02:28.447993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448094 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:02:28.448101 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448107 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:02:28.448114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448132 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:02:28.448139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:28.448153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:28.448177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:02:40.779590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779706 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:02:40.779721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779743 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:02:40.779753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779763 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:02:40.779773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:02:40.779784 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:02:40.779794 | orchestrator | 2026-03-19 04:02:40.779805 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-19 04:02:40.779829 | orchestrator | 
Thursday 19 March 2026 04:02:28 +0000 (0:00:03.274) 0:00:24.622 ******** 2026-03-19 04:02:40.779840 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:02:40.779849 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:02:40.779859 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:02:40.779868 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:02:40.779878 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:02:40.779887 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:02:40.779912 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:02:40.779922 | orchestrator | 2026-03-19 04:02:40.779932 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-19 04:02:40.779942 | orchestrator | Thursday 19 March 2026 04:02:30 +0000 (0:00:02.234) 0:00:26.856 ******** 2026-03-19 04:02:40.779951 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:02:40.779961 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:02:40.779971 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:02:40.779980 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:02:40.779990 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:02:40.780001 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:02:40.780013 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:02:40.780024 | orchestrator | 2026-03-19 04:02:40.780044 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-19 04:02:40.780055 | orchestrator | Thursday 19 March 2026 04:02:32 +0000 (0:00:02.133) 0:00:28.990 ******** 2026-03-19 04:02:40.780066 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:02:40.780078 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:02:40.780089 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:02:40.780100 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:02:40.780113 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 04:02:40.780124 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:02:40.780136 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:02:40.780147 | orchestrator | 2026-03-19 04:02:40.780158 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-19 04:02:40.780170 | orchestrator | Thursday 19 March 2026 04:02:34 +0000 (0:00:02.033) 0:00:31.023 ******** 2026-03-19 04:02:40.780182 | orchestrator | changed: [testbed-manager] 2026-03-19 04:02:40.780193 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:02:40.780204 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:02:40.780216 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:02:40.780228 | orchestrator | changed: [testbed-node-3] 2026-03-19 04:02:40.780239 | orchestrator | changed: [testbed-node-4] 2026-03-19 04:02:40.780249 | orchestrator | changed: [testbed-node-5] 2026-03-19 04:02:40.780261 | orchestrator | 2026-03-19 04:02:40.780272 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-19 04:02:40.780283 | orchestrator | Thursday 19 March 2026 04:02:37 +0000 (0:00:02.927) 0:00:33.951 ******** 2026-03-19 04:02:40.780296 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:40.780308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:40.780320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:40.780332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:40.780358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:42.769178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:42.769254 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:02:42.769267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769317 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:02:42.769380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:02.835053 | orchestrator | 2026-03-19 04:03:02.835157 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-19 04:03:02.835172 | orchestrator | Thursday 19 March 2026 04:02:42 +0000 (0:00:04.994) 0:00:38.946 ******** 2026-03-19 04:03:02.835182 | orchestrator | [WARNING]: Skipped 2026-03-19 04:03:02.835197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-19 04:03:02.835214 | orchestrator | to this access issue: 2026-03-19 04:03:02.835229 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-19 04:03:02.835255 | orchestrator | directory 2026-03-19 04:03:02.835270 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:03:02.835285 | 
orchestrator | 2026-03-19 04:03:02.835301 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-19 04:03:02.835317 | orchestrator | Thursday 19 March 2026 04:02:45 +0000 (0:00:02.496) 0:00:41.442 ******** 2026-03-19 04:03:02.835333 | orchestrator | [WARNING]: Skipped 2026-03-19 04:03:02.835347 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-19 04:03:02.835360 | orchestrator | to this access issue: 2026-03-19 04:03:02.835376 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-19 04:03:02.835391 | orchestrator | directory 2026-03-19 04:03:02.835405 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:03:02.835415 | orchestrator | 2026-03-19 04:03:02.835425 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-19 04:03:02.835434 | orchestrator | Thursday 19 March 2026 04:02:47 +0000 (0:00:01.859) 0:00:43.302 ******** 2026-03-19 04:03:02.835443 | orchestrator | [WARNING]: Skipped 2026-03-19 04:03:02.835452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-19 04:03:02.835461 | orchestrator | to this access issue: 2026-03-19 04:03:02.835470 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-19 04:03:02.835479 | orchestrator | directory 2026-03-19 04:03:02.835487 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:03:02.835496 | orchestrator | 2026-03-19 04:03:02.835504 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-19 04:03:02.835513 | orchestrator | Thursday 19 March 2026 04:02:48 +0000 (0:00:01.862) 0:00:45.164 ******** 2026-03-19 04:03:02.835521 | orchestrator | [WARNING]: Skipped 2026-03-19 04:03:02.835530 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-19 04:03:02.835539 | orchestrator | to this access issue: 2026-03-19 04:03:02.835547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-19 04:03:02.835556 | orchestrator | directory 2026-03-19 04:03:02.835565 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:03:02.835596 | orchestrator | 2026-03-19 04:03:02.835608 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-19 04:03:02.835618 | orchestrator | Thursday 19 March 2026 04:02:50 +0000 (0:00:01.869) 0:00:47.034 ******** 2026-03-19 04:03:02.835629 | orchestrator | changed: [testbed-manager] 2026-03-19 04:03:02.835639 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:03:02.835649 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:03:02.835659 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:03:02.835726 | orchestrator | changed: [testbed-node-3] 2026-03-19 04:03:02.835739 | orchestrator | changed: [testbed-node-4] 2026-03-19 04:03:02.835750 | orchestrator | changed: [testbed-node-5] 2026-03-19 04:03:02.835760 | orchestrator | 2026-03-19 04:03:02.835770 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-19 04:03:02.835780 | orchestrator | Thursday 19 March 2026 04:02:54 +0000 (0:00:03.886) 0:00:50.920 ******** 2026-03-19 04:03:02.835791 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835802 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835813 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835822 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835832 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835842 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835852 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:03:02.835862 | orchestrator | 2026-03-19 04:03:02.835872 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-19 04:03:02.835883 | orchestrator | Thursday 19 March 2026 04:02:58 +0000 (0:00:03.483) 0:00:54.404 ******** 2026-03-19 04:03:02.835893 | orchestrator | ok: [testbed-manager] 2026-03-19 04:03:02.835919 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:03:02.835929 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:03:02.835940 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:03:02.835950 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:03:02.835960 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:03:02.835970 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:03:02.835980 | orchestrator | 2026-03-19 04:03:02.835990 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-19 04:03:02.835999 | orchestrator | Thursday 19 March 2026 04:03:01 +0000 (0:00:02.832) 0:00:57.236 ******** 2026-03-19 04:03:02.836027 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:02.836040 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:02.836050 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:02.836067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 
04:03:02.836078 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:02.836089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:02.836103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:02.836117 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:10.240345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:10.240467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240509 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:10.240519 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:10.240528 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:10.240537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:10.240558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240595 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:10.240611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-19 04:03:10.240620 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240636 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:10.240645 | orchestrator | 2026-03-19 04:03:10.240654 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-19 04:03:10.240664 | orchestrator | Thursday 19 March 2026 04:03:03 +0000 (0:00:02.875) 0:01:00.112 ******** 2026-03-19 04:03:10.240672 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240725 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240735 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240743 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240751 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240758 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240766 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:03:10.240774 | orchestrator | 2026-03-19 04:03:10.240782 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-19 04:03:10.240796 | orchestrator | Thursday 19 March 2026 04:03:07 +0000 (0:00:03.157) 0:01:03.269 ******** 2026-03-19 04:03:10.240805 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:10.240813 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:10.240821 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:10.240830 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:10.240844 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:10.240860 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:12.722255 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:03:12.722358 | orchestrator | 2026-03-19 04:03:12.722375 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-19 04:03:12.722388 | orchestrator | Thursday 19 March 2026 04:03:10 +0000 (0:00:03.148) 0:01:06.417 ******** 
2026-03-19 04:03:12.722403 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:03:12.722510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 
04:03:12.722565 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722610 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:12.722668 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.225950 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:03:17.226106 | orchestrator | 2026-03-19 04:03:17.226111 | 
orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-19 04:03:17.226116 | orchestrator | Thursday 19 March 2026 04:03:14 +0000 (0:00:04.329) 0:01:10.747 ******** 2026-03-19 04:03:17.226147 | orchestrator | changed: [testbed-manager] => { 2026-03-19 04:03:17.226153 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226158 | orchestrator | } 2026-03-19 04:03:17.226161 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:03:17.226165 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226169 | orchestrator | } 2026-03-19 04:03:17.226173 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:03:17.226177 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226180 | orchestrator | } 2026-03-19 04:03:17.226184 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:03:17.226188 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226191 | orchestrator | } 2026-03-19 04:03:17.226195 | orchestrator | changed: [testbed-node-3] => { 2026-03-19 04:03:17.226199 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226203 | orchestrator | } 2026-03-19 04:03:17.226206 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 04:03:17.226210 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226214 | orchestrator | } 2026-03-19 04:03:17.226218 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 04:03:17.226222 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:03:17.226226 | orchestrator | } 2026-03-19 04:03:17.226230 | orchestrator | 2026-03-19 04:03:17.226233 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:03:17.226237 | orchestrator | Thursday 19 March 2026 04:03:16 +0000 (0:00:02.100) 0:01:12.847 ******** 2026-03-19 04:03:17.226253 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:17.226260 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:17.226264 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:17.226268 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:03:17.226272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:17.226280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:17.226284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:17.226289 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:03:17.226293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:17.226302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611110 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:03:23.611123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:23.611134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611173 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:03:23.611195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:23.611206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:23.611243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-19 04:03:23.611251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611264 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:03:23.611272 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:03:23.611279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:23.611286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:23.611304 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:03:23.611311 | orchestrator | 2026-03-19 04:03:23.611317 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611325 | orchestrator | Thursday 19 March 2026 04:03:19 +0000 (0:00:03.041) 0:01:15.889 ******** 2026-03-19 04:03:23.611331 | orchestrator | 2026-03-19 04:03:23.611337 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611344 | orchestrator | Thursday 19 March 2026 04:03:20 +0000 (0:00:00.457) 0:01:16.347 ******** 2026-03-19 04:03:23.611350 | orchestrator | 2026-03-19 04:03:23.611357 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611364 | orchestrator | Thursday 19 March 2026 04:03:20 +0000 (0:00:00.477) 0:01:16.824 ******** 2026-03-19 04:03:23.611371 | orchestrator | 2026-03-19 04:03:23.611378 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611385 | orchestrator | Thursday 19 March 2026 04:03:21 +0000 (0:00:00.436) 0:01:17.261 ******** 2026-03-19 04:03:23.611392 | orchestrator | 2026-03-19 04:03:23.611399 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611406 | orchestrator | Thursday 19 March 2026 04:03:21 +0000 (0:00:00.446) 0:01:17.707 ******** 2026-03-19 04:03:23.611413 | orchestrator | 2026-03-19 04:03:23.611420 | orchestrator | TASK [common : Flush handlers] ************************************************* 
2026-03-19 04:03:23.611428 | orchestrator | Thursday 19 March 2026 04:03:22 +0000 (0:00:00.754) 0:01:18.462 ******** 2026-03-19 04:03:23.611435 | orchestrator | 2026-03-19 04:03:23.611442 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-19 04:03:23.611449 | orchestrator | Thursday 19 March 2026 04:03:22 +0000 (0:00:00.459) 0:01:18.922 ******** 2026-03-19 04:03:23.611455 | orchestrator | 2026-03-19 04:03:23.611467 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-19 04:03:26.407186 | orchestrator | Thursday 19 March 2026 04:03:23 +0000 (0:00:00.848) 0:01:19.770 ******** 2026-03-19 04:03:26.407286 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fclnx1k0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fclnx1k0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_fclnx1k0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:26.407346 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dqimjiex/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dqimjiex/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dqimjiex/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:26.407364 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__aiypitu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__aiypitu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__aiypitu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:26.407377 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ngb7aojh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ngb7aojh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ngb7aojh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:29.876124 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fes9h657/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fes9h657/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_fes9h657/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:29.876298 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yu_1vbr7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yu_1vbr7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_yu_1vbr7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:29.876340 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_11g407i_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_11g407i_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_11g407i_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-19 04:03:29.876352 | orchestrator | 2026-03-19 04:03:29.876363 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:03:29.876373 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876394 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876411 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876419 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876427 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876435 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876443 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-19 04:03:29.876451 | orchestrator | 2026-03-19 04:03:29.876487 | orchestrator | 2026-03-19 04:03:29.876503 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:03:30.395524 | orchestrator | 2026-03-19 04:03:30 | INFO  | Task dde84a37-d403-4d7b-bd9a-db09bb68337b (common) was prepared for execution. 
2026-03-19 04:03:30.395640 | orchestrator | 2026-03-19 04:03:30 | INFO  | It takes a moment until task dde84a37-d403-4d7b-bd9a-db09bb68337b (common) has been started and output is visible here.
2026-03-19 04:03:49.088963 | orchestrator | Thursday 19 March 2026 04:03:29 +0000 (0:00:06.282) 0:01:26.053 ********
2026-03-19 04:03:49.089082 | orchestrator | ===============================================================================
2026-03-19 04:03:49.089100 | orchestrator | common : Restart fluentd container -------------------------------------- 6.28s
2026-03-19 04:03:49.089112 | orchestrator | common : Copying over config.json files for services -------------------- 4.99s
2026-03-19 04:03:49.089123 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.82s
2026-03-19 04:03:49.089135 | orchestrator | service-check-containers : common | Check containers -------------------- 4.33s
2026-03-19 04:03:49.089147 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.90s
2026-03-19 04:03:49.089158 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.89s
2026-03-19 04:03:49.089169 | orchestrator | common : Flush handlers ------------------------------------------------- 3.88s
2026-03-19 04:03:49.089181 | orchestrator | common : include_tasks -------------------------------------------------- 3.87s
2026-03-19 04:03:49.089192 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.48s
2026-03-19 04:03:49.089203 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.43s
2026-03-19 04:03:49.089215 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.27s
2026-03-19 04:03:49.089226 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.16s
2026-03-19 04:03:49.089237 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.15s
2026-03-19 04:03:49.089248 | orchestrator | common : include_tasks -------------------------------------------------- 3.13s
2026-03-19 04:03:49.089260 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.04s
2026-03-19 04:03:49.089287 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.93s
2026-03-19 04:03:49.089299 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.88s
2026-03-19 04:03:49.089310 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.83s
2026-03-19 04:03:49.089321 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.50s
2026-03-19 04:03:49.089332 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.23s
2026-03-19 04:03:49.089366 | orchestrator |
2026-03-19 04:03:49.089379 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-19 04:03:49.089390 | orchestrator |
2026-03-19 04:03:49.089401 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-19 04:03:49.089413 | orchestrator | Thursday 19 March 2026 04:03:36 +0000 (0:00:01.959) 0:00:01.959 ********
2026-03-19 04:03:49.089424 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 04:03:49.089437 | orchestrator |
2026-03-19 04:03:49.089448 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-19 04:03:49.089459 | orchestrator | Thursday 19 March 2026 04:03:40 +0000 (0:00:03.249) 0:00:05.209 ********
2026-03-19 04:03:49.089471 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089482 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089494 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089508 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089521 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089533 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089547 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089561 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089574 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-19 04:03:49.089587 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089600 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089613 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089626 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089639 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089653 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089666 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-19 04:03:49.089679 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089691 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089705 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089718 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089815 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-19 04:03:49.089840 | orchestrator |
2026-03-19 04:03:49.089861 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-19 04:03:49.089880 | orchestrator | Thursday 19 March 2026 04:03:43 +0000 (0:00:03.298) 0:00:08.508 ********
2026-03-19 04:03:49.089899 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 04:03:49.089912 | orchestrator |
2026-03-19 04:03:49.089923 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-19 04:03:49.089934 | orchestrator | Thursday 19 March 2026 04:03:46 +0000 (0:00:03.005) 0:00:11.514 ********
2026-03-19 04:03:49.089948 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.089981 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.089993 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.090005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.090081 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.090095 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:49.090116 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:51.506924 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507073 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507087 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507095 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507116 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507130 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507140 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507165 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507172 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507179 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:51.507187 | orchestrator |
2026-03-19 04:03:51.507195 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-19 04:03:51.507202 | orchestrator | Thursday 19 March 2026 04:03:50 +0000 (0:00:04.569) 0:00:16.083 ********
2026-03-19 04:03:51.507211 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:51.507241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:53.732186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-19 04:03:53.732342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-19 04:03:53.732369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732429 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:03:53.732448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:53.732563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:53.732583 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:03:53.732604 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:03:53.732622 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:53.732642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:53.732846 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:03:53.732878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811195 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:03:54.811314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811350 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:03:54.811365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:54.811380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811405 | 
orchestrator | skipping: [testbed-node-5] 2026-03-19 04:03:54.811416 | orchestrator | 2026-03-19 04:03:54.811428 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-19 04:03:54.811460 | orchestrator | Thursday 19 March 2026 04:03:53 +0000 (0:00:02.740) 0:00:18.824 ******** 2026-03-19 04:03:54.811473 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:54.811485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:54.811496 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:54.811539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:03:54.811595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811606 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:03:54.811618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:03:54.811639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.063588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.063714 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:04:09.063729 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:04:09.063737 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:04:09.063747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:09.063758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.063838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:09.063856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:09.063872 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.063886 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:04:09.063919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.063983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.064001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.064010 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:04:09.064018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:09.064036 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:04:09.064044 | orchestrator | 2026-03-19 04:04:09.064053 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-19 04:04:09.064063 | orchestrator | Thursday 19 March 2026 04:03:56 +0000 (0:00:03.230) 0:00:22.054 ******** 2026-03-19 04:04:09.064071 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:04:09.064081 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:04:09.064091 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:04:09.064101 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:04:09.064111 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:04:09.064120 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:04:09.064130 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:04:09.064140 | orchestrator | 2026-03-19 04:04:09.064150 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-19 04:04:09.064160 | orchestrator | Thursday 19 March 2026 04:03:59 +0000 (0:00:02.246) 
0:00:24.300 ******** 2026-03-19 04:04:09.064169 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:04:09.064179 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:04:09.064189 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:04:09.064199 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:04:09.064209 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:04:09.064218 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:04:09.064228 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:04:09.064237 | orchestrator | 2026-03-19 04:04:09.064247 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-19 04:04:09.064258 | orchestrator | Thursday 19 March 2026 04:04:01 +0000 (0:00:02.080) 0:00:26.380 ******** 2026-03-19 04:04:09.064272 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:04:09.064284 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:04:09.064297 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:04:09.064310 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:04:09.064324 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:04:09.064334 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:04:09.064344 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:04:09.064353 | orchestrator | 2026-03-19 04:04:09.064363 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-19 04:04:09.064373 | orchestrator | Thursday 19 March 2026 04:04:03 +0000 (0:00:01.984) 0:00:28.365 ******** 2026-03-19 04:04:09.064382 | orchestrator | ok: [testbed-manager] 2026-03-19 04:04:09.064392 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:04:09.064402 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:04:09.064411 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:04:09.064421 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:04:09.064431 | orchestrator | ok: [testbed-node-4] 2026-03-19 
04:04:09.064441 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:04:09.064450 | orchestrator | 2026-03-19 04:04:09.064460 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-19 04:04:09.064470 | orchestrator | Thursday 19 March 2026 04:04:06 +0000 (0:00:02.843) 0:00:31.209 ******** 2026-03-19 04:04:09.064478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:09.064495 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968740 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968758 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968826 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:10.968840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.968853 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.968918 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.968944 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.968973 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.968992 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.969010 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.969028 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:10.969060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-19 04:04:10.969094 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.825600 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.825710 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.825724 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.825736 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.825746 | orchestrator | 2026-03-19 04:04:29.825758 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-19 04:04:29.825769 | orchestrator | Thursday 19 March 2026 04:04:10 +0000 (0:00:04.861) 0:00:36.070 ******** 2026-03-19 04:04:29.825780 | orchestrator | [WARNING]: Skipped 2026-03-19 04:04:29.825790 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-19 04:04:29.825867 | orchestrator | to this access issue: 2026-03-19 04:04:29.825881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-19 04:04:29.825891 | orchestrator | directory 2026-03-19 04:04:29.825901 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:04:29.825912 | orchestrator | 2026-03-19 04:04:29.825922 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-19 04:04:29.825932 | orchestrator | Thursday 19 March 2026 04:04:13 +0000 (0:00:02.493) 0:00:38.563 ******** 2026-03-19 04:04:29.825942 | orchestrator | [WARNING]: Skipped 2026-03-19 04:04:29.825952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-19 04:04:29.825982 | orchestrator | to this access issue: 2026-03-19 04:04:29.825993 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-19 04:04:29.826003 | orchestrator | directory 2026-03-19 04:04:29.826013 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:04:29.826070 | orchestrator | 2026-03-19 
04:04:29.826080 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-19 04:04:29.826090 | orchestrator | Thursday 19 March 2026 04:04:15 +0000 (0:00:01.887) 0:00:40.451 ******** 2026-03-19 04:04:29.826099 | orchestrator | [WARNING]: Skipped 2026-03-19 04:04:29.826109 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-19 04:04:29.826118 | orchestrator | to this access issue: 2026-03-19 04:04:29.826130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-19 04:04:29.826142 | orchestrator | directory 2026-03-19 04:04:29.826152 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:04:29.826163 | orchestrator | 2026-03-19 04:04:29.826174 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-19 04:04:29.826186 | orchestrator | Thursday 19 March 2026 04:04:17 +0000 (0:00:01.831) 0:00:42.283 ******** 2026-03-19 04:04:29.826197 | orchestrator | [WARNING]: Skipped 2026-03-19 04:04:29.826208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-19 04:04:29.826217 | orchestrator | to this access issue: 2026-03-19 04:04:29.826227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-19 04:04:29.826237 | orchestrator | directory 2026-03-19 04:04:29.826246 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-19 04:04:29.826256 | orchestrator | 2026-03-19 04:04:29.826265 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-19 04:04:29.826275 | orchestrator | Thursday 19 March 2026 04:04:18 +0000 (0:00:01.830) 0:00:44.113 ******** 2026-03-19 04:04:29.826284 | orchestrator | ok: [testbed-manager] 2026-03-19 04:04:29.826294 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:04:29.826309 | orchestrator | ok: 
[testbed-node-1] 2026-03-19 04:04:29.826319 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:04:29.826328 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:04:29.826338 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:04:29.826348 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:04:29.826357 | orchestrator | 2026-03-19 04:04:29.826382 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-19 04:04:29.826393 | orchestrator | Thursday 19 March 2026 04:04:22 +0000 (0:00:03.735) 0:00:47.849 ******** 2026-03-19 04:04:29.826403 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826413 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826423 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826433 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826443 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826453 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826462 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-19 04:04:29.826472 | orchestrator | 2026-03-19 04:04:29.826482 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-19 04:04:29.826491 | orchestrator | Thursday 19 March 2026 04:04:25 +0000 (0:00:03.259) 0:00:51.109 ******** 2026-03-19 04:04:29.826501 | orchestrator | ok: [testbed-manager] 2026-03-19 04:04:29.826511 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:04:29.826520 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 04:04:29.826530 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:04:29.826547 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:04:29.826557 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:04:29.826571 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:04:29.826587 | orchestrator | 2026-03-19 04:04:29.826614 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-19 04:04:29.826629 | orchestrator | Thursday 19 March 2026 04:04:28 +0000 (0:00:02.842) 0:00:53.952 ******** 2026-03-19 04:04:29.826647 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:29.826667 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:29.826684 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:29.826700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:29.826735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.850641 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:30.850744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.850788 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:30.850801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.850887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:30.850902 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:30.850929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.850960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:30.850973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:30.850996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.851008 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:30.851020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:30.851031 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:30.851043 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:30.851060 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:30.851079 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:40.769394 | orchestrator | 2026-03-19 04:04:40.769518 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-19 04:04:40.769539 | orchestrator | Thursday 19 March 2026 04:04:31 +0000 (0:00:03.154) 0:00:57.106 ******** 2026-03-19 04:04:40.769554 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769570 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769584 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769599 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769613 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769626 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769641 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-19 04:04:40.769654 | orchestrator | 2026-03-19 04:04:40.769667 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] 
********************** 2026-03-19 04:04:40.769681 | orchestrator | Thursday 19 March 2026 04:04:34 +0000 (0:00:02.998) 0:01:00.105 ******** 2026-03-19 04:04:40.769694 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769708 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769721 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769734 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769746 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769760 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769773 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-19 04:04:40.769787 | orchestrator | 2026-03-19 04:04:40.769801 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-19 04:04:40.769814 | orchestrator | Thursday 19 March 2026 04:04:38 +0000 (0:00:03.385) 0:01:03.490 ******** 2026-03-19 04:04:40.769856 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.769875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.769891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.769951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.769991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.770007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 04:04:40.770076 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:40.770092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-19 
04:04:40.770107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:40.770122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:40.770151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-19 04:04:40.770174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.282907 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.282996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283075 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:04:45.283115 | orchestrator | 2026-03-19 04:04:45.283124 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-19 04:04:45.283133 | orchestrator | Thursday 19 March 2026 04:04:42 +0000 (0:00:04.434) 0:01:07.925 ******** 2026-03-19 04:04:45.283142 | orchestrator | changed: [testbed-manager] => { 2026-03-19 04:04:45.283150 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283157 | orchestrator | } 2026-03-19 04:04:45.283165 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:04:45.283172 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283179 | 
orchestrator | } 2026-03-19 04:04:45.283187 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:04:45.283194 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283201 | orchestrator | } 2026-03-19 04:04:45.283208 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:04:45.283215 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283222 | orchestrator | } 2026-03-19 04:04:45.283230 | orchestrator | changed: [testbed-node-3] => { 2026-03-19 04:04:45.283237 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283244 | orchestrator | } 2026-03-19 04:04:45.283251 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 04:04:45.283258 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283266 | orchestrator | } 2026-03-19 04:04:45.283273 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 04:04:45.283280 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:04:45.283287 | orchestrator | } 2026-03-19 04:04:45.283294 | orchestrator | 2026-03-19 04:04:45.283302 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:04:45.283309 | orchestrator | Thursday 19 March 2026 04:04:44 +0000 (0:00:02.101) 0:01:10.027 ******** 2026-03-19 04:04:45.283318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:45.283331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:45.283339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:45.283347 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:04:45.283355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:45.283370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742344 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:04:51.742366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:51.742381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742444 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:04:51.742464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:51.742504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742527 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742545 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:04:51.742592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:51.742613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742667 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:04:51.742685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:51.742704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:04:51.742752 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:04:51.742771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-19 04:04:51.742802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:06:20.954641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:06:20.954784 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:06:20.954845 | orchestrator | 2026-03-19 04:06:20.954864 | 
orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.954882 | orchestrator | Thursday 19 March 2026 04:04:47 +0000 (0:00:03.059) 0:01:13.087 ********
2026-03-19 04:06:20.954899 | orchestrator |
2026-03-19 04:06:20.954916 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.954932 | orchestrator | Thursday 19 March 2026 04:04:48 +0000 (0:00:00.431) 0:01:13.519 ********
2026-03-19 04:06:20.954948 | orchestrator |
2026-03-19 04:06:20.954964 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.955009 | orchestrator | Thursday 19 March 2026 04:04:48 +0000 (0:00:00.446) 0:01:13.966 ********
2026-03-19 04:06:20.955019 | orchestrator |
2026-03-19 04:06:20.955029 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.955039 | orchestrator | Thursday 19 March 2026 04:04:49 +0000 (0:00:00.429) 0:01:14.395 ********
2026-03-19 04:06:20.955048 | orchestrator |
2026-03-19 04:06:20.955058 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.955068 | orchestrator | Thursday 19 March 2026 04:04:49 +0000 (0:00:00.417) 0:01:14.812 ********
2026-03-19 04:06:20.955077 | orchestrator |
2026-03-19 04:06:20.955087 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.955096 | orchestrator | Thursday 19 March 2026 04:04:50 +0000 (0:00:00.737) 0:01:15.550 ********
2026-03-19 04:06:20.955106 | orchestrator |
2026-03-19 04:06:20.955116 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-19 04:06:20.955127 | orchestrator | Thursday 19 March 2026 04:04:50 +0000 (0:00:00.473) 0:01:16.023 ********
2026-03-19 04:06:20.955138 | orchestrator |
2026-03-19 04:06:20.955150 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-19 04:06:20.955162 | orchestrator | Thursday 19 March 2026 04:04:51 +0000 (0:00:00.802) 0:01:16.825 ********
2026-03-19 04:06:20.955173 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:06:20.955184 | orchestrator | changed: [testbed-node-5]
2026-03-19 04:06:20.955196 | orchestrator | changed: [testbed-node-3]
2026-03-19 04:06:20.955206 | orchestrator | changed: [testbed-manager]
2026-03-19 04:06:20.955217 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:06:20.955228 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:06:20.955240 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:06:20.955251 | orchestrator |
2026-03-19 04:06:20.955263 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-19 04:06:20.955275 | orchestrator | Thursday 19 March 2026 04:05:27 +0000 (0:00:35.564) 0:01:52.390 ********
2026-03-19 04:06:20.955288 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:06:20.955305 | orchestrator | changed: [testbed-node-3]
2026-03-19 04:06:20.955321 | orchestrator | changed: [testbed-node-5]
2026-03-19 04:06:20.955337 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:06:20.955353 | orchestrator | changed: [testbed-manager]
2026-03-19 04:06:20.955370 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:06:20.955388 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:06:20.955405 | orchestrator |
2026-03-19 04:06:20.955421 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-19 04:06:20.955440 | orchestrator | Thursday 19 March 2026 04:06:04 +0000 (0:00:37.442) 0:02:29.832 ********
2026-03-19 04:06:20.955457 | orchestrator | ok: [testbed-manager]
2026-03-19 04:06:20.955473 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:06:20.955486 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:06:20.955496 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:06:20.955505 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:06:20.955514 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:06:20.955541 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:06:20.955551 | orchestrator |
2026-03-19 04:06:20.955561 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-19 04:06:20.955573 | orchestrator | Thursday 19 March 2026 04:06:07 +0000 (0:00:03.157) 0:02:32.990 ********
2026-03-19 04:06:20.955603 | orchestrator | changed: [testbed-manager]
2026-03-19 04:06:20.955621 | orchestrator | changed: [testbed-node-3]
2026-03-19 04:06:20.955637 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:06:20.955653 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:06:20.955669 | orchestrator | changed: [testbed-node-5]
2026-03-19 04:06:20.955685 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:06:20.955701 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:06:20.955717 | orchestrator |
2026-03-19 04:06:20.955735 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 04:06:20.955755 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955774 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955790 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955807 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955847 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955865 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955881 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:06:20.955897 | orchestrator |
2026-03-19 04:06:20.955914 | orchestrator |
2026-03-19 04:06:20.955930 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 04:06:20.955948 | orchestrator | Thursday 19 March 2026 04:06:20 +0000 (0:00:12.468) 0:02:45.458 ********
2026-03-19 04:06:20.955964 | orchestrator | ===============================================================================
2026-03-19 04:06:20.956006 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.44s
2026-03-19 04:06:20.956021 | orchestrator | common : Restart fluentd container ------------------------------------- 35.57s
2026-03-19 04:06:20.956035 | orchestrator | common : Restart cron container ---------------------------------------- 12.47s
2026-03-19 04:06:20.956049 | orchestrator | common : Copying over config.json files for services -------------------- 4.86s
2026-03-19 04:06:20.956063 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.57s
2026-03-19 04:06:20.956076 | orchestrator | service-check-containers : common | Check containers -------------------- 4.44s
2026-03-19 04:06:20.956090 | orchestrator | common : Flush handlers ------------------------------------------------- 3.74s
2026-03-19 04:06:20.956104 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.74s
2026-03-19 04:06:20.956118 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.39s
2026-03-19 04:06:20.956132 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.30s
2026-03-19 04:06:20.956146 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.26s
2026-03-19 04:06:20.956160 | orchestrator | common : include_tasks -------------------------------------------------- 3.25s
2026-03-19 04:06:20.956173 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.23s
2026-03-19 04:06:20.956187 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.16s
2026-03-19 04:06:20.956202 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.15s
2026-03-19 04:06:20.956217 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.06s
2026-03-19 04:06:20.956247 | orchestrator | common : include_tasks -------------------------------------------------- 3.01s
2026-03-19 04:06:20.956264 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.00s
2026-03-19 04:06:20.956280 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.84s
2026-03-19 04:06:20.956297 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.84s
2026-03-19 04:06:21.273914 | orchestrator | + osism apply -a upgrade loadbalancer
2026-03-19 04:06:23.371522 | orchestrator | 2026-03-19 04:06:23 | INFO  | Task 55e524e6-d502-489d-b92e-0869460a6c95 (loadbalancer) was prepared for execution.
2026-03-19 04:06:23.371649 | orchestrator | 2026-03-19 04:06:23 | INFO  | It takes a moment until task 55e524e6-d502-489d-b92e-0869460a6c95 (loadbalancer) has been started and output is visible here.
2026-03-19 04:07:00.619668 | orchestrator |
2026-03-19 04:07:00.619792 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 04:07:00.619809 | orchestrator |
2026-03-19 04:07:00.619821 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 04:07:00.619849 | orchestrator | Thursday 19 March 2026 04:06:30 +0000 (0:00:01.954) 0:00:01.954 ********
2026-03-19 04:07:00.619861 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:07:00.619873 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:07:00.619885 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:07:00.619896 | orchestrator |
2026-03-19 04:07:00.619907 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 04:07:00.619919 | orchestrator | Thursday 19 March 2026 04:06:32 +0000 (0:00:01.827) 0:00:03.782 ********
2026-03-19 04:07:00.619931 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-19 04:07:00.619942 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-19 04:07:00.619953 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-19 04:07:00.619964 | orchestrator |
2026-03-19 04:07:00.619976 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-19 04:07:00.619987 | orchestrator |
2026-03-19 04:07:00.619999 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-19 04:07:00.620010 | orchestrator | Thursday 19 March 2026 04:06:34 +0000 (0:00:02.009) 0:00:05.792 ********
2026-03-19 04:07:00.620074 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:07:00.620089 | orchestrator |
2026-03-19 04:07:00.620100 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-03-19 04:07:00.620111 | orchestrator | Thursday 19 March 2026 04:06:37 +0000 (0:00:02.732) 0:00:08.524 ********
2026-03-19 04:07:00.620122 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:07:00.620134 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:07:00.620144 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:07:00.620155 | orchestrator |
2026-03-19 04:07:00.620166 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-03-19 04:07:00.620177 | orchestrator | Thursday 19 March 2026 04:06:39 +0000 (0:00:02.258) 0:00:10.783 ********
2026-03-19 04:07:00.620188 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:07:00.620201 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:07:00.620213 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:07:00.620226 | orchestrator |
2026-03-19 04:07:00.620239 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-19 04:07:00.620253 | orchestrator | Thursday 19 March 2026 04:06:41 +0000 (0:00:02.349) 0:00:13.132 ********
2026-03-19 04:07:00.620265 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:07:00.620278 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:07:00.620291 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:07:00.620305 | orchestrator |
2026-03-19 04:07:00.620318 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-19 04:07:00.620331 | orchestrator | Thursday 19 March 2026 04:06:43 +0000 (0:00:01.922) 0:00:15.055 ********
2026-03-19 04:07:00.620344 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:07:00.620382 | orchestrator |
2026-03-19 04:07:00.620396 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-19 04:07:00.620409 | orchestrator | Thursday 19 March 2026 04:06:45 +0000 (0:00:01.843) 0:00:17.048 ********
2026-03-19 04:07:00.620422 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:07:00.620435 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:07:00.620448 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:07:00.620462 | orchestrator |
2026-03-19 04:07:00.620475 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-19 04:07:00.620488 | orchestrator | Thursday 19 March 2026 04:06:47 +0000 (0:00:01.843) 0:00:18.891 ********
2026-03-19 04:07:00.620501 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620515 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620528 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620541 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620554 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620565 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 04:07:00.620577 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 04:07:00.620588 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 04:07:00.620599 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 04:07:00.620610 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-19 04:07:00.620621 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-19 04:07:00.620632 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-19 04:07:00.620642 | orchestrator |
2026-03-19 04:07:00.620653 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-19 04:07:00.620664 | orchestrator | Thursday 19 March 2026 04:06:51 +0000 (0:00:04.188) 0:00:23.079 ********
2026-03-19 04:07:00.620675 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-19 04:07:00.620686 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-19 04:07:00.620697 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-19 04:07:00.620708 | orchestrator |
2026-03-19 04:07:00.620719 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-19 04:07:00.620748 | orchestrator | Thursday 19 March 2026 04:06:53 +0000 (0:00:02.005) 0:00:25.085 ********
2026-03-19 04:07:00.620760 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-19 04:07:00.620771 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-19 04:07:00.620782 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-19 04:07:00.620793 | orchestrator |
2026-03-19 04:07:00.620810 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-19 04:07:00.620821 | orchestrator | Thursday 19 March 2026 04:06:55 +0000 (0:00:02.226) 0:00:27.312 ********
2026-03-19 04:07:00.620833 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-19 04:07:00.620844 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:07:00.620855 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-19 04:07:00.620866 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:07:00.620877 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-19 04:07:00.620887 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:07:00.620898 | orchestrator |
2026-03-19 04:07:00.620909 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-19 04:07:00.620920 | orchestrator | Thursday 19 March 2026 04:06:57 +0000 (0:00:02.026) 0:00:29.338 ******** 2026-03-19 04:07:00.620943 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:00.620961 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:00.620973 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:00.620985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:00.620997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:00.621048 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:11.819183 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:11.819271 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:11.819278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:11.819283 | orchestrator | 2026-03-19 04:07:11.819288 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-19 04:07:11.819293 | orchestrator | Thursday 19 March 2026 04:07:00 +0000 (0:00:02.738) 0:00:32.077 ******** 2026-03-19 04:07:11.819297 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:07:11.819302 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:07:11.819306 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:07:11.819310 | orchestrator | 2026-03-19 04:07:11.819314 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-19 04:07:11.819318 | orchestrator | Thursday 19 March 2026 04:07:02 +0000 (0:00:02.030) 0:00:34.108 ******** 2026-03-19 04:07:11.819322 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-19 04:07:11.819326 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-19 04:07:11.819330 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-19 04:07:11.819334 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-19 04:07:11.819338 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-19 04:07:11.819341 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-19 04:07:11.819345 | orchestrator | 2026-03-19 04:07:11.819349 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-19 04:07:11.819352 | orchestrator | Thursday 19 March 2026 04:07:05 +0000 (0:00:02.897) 0:00:37.005 ******** 2026-03-19 04:07:11.819356 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:07:11.819360 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:07:11.819364 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:07:11.819367 | orchestrator | 2026-03-19 04:07:11.819371 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-19 04:07:11.819375 | orchestrator | Thursday 19 March 2026 04:07:07 +0000 (0:00:02.297) 0:00:39.303 ******** 2026-03-19 04:07:11.819379 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 04:07:11.819383 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:07:11.819386 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:07:11.819390 | orchestrator | 2026-03-19 04:07:11.819394 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-19 04:07:11.819397 | orchestrator | Thursday 19 March 2026 04:07:10 +0000 (0:00:02.267) 0:00:41.571 ******** 2026-03-19 04:07:11.819401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 04:07:11.819441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:11.819446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:11.819451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:11.819456 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:11.819460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 04:07:11.819464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:11.819468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:11.819478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:11.819482 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
04:07:11.819490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 04:07:16.042215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:16.042291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:16.042300 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:16.042307 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:07:16.042314 | orchestrator | 2026-03-19 04:07:16.042321 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-19 04:07:16.042346 | orchestrator | Thursday 19 March 2026 04:07:11 +0000 (0:00:01.694) 0:00:43.266 ******** 2026-03-19 04:07:16.042352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:16.042359 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:16.042377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:16.042396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:16.042402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:16.042408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:16.042418 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:16.042424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:16.042433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:16.042444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:30.064791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:30.064948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f', '__omit_place_holder__0cfe29fc9a77ab5b5e5c0806968304b09dcb234f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-19 04:07:30.064975 | orchestrator | 2026-03-19 04:07:30.065036 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-19 04:07:30.065056 | orchestrator | Thursday 19 March 2026 04:07:16 +0000 (0:00:04.234) 0:00:47.500 ******** 2026-03-19 04:07:30.065099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:30.065250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:30.065266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:30.065287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:30.065304 | orchestrator | 2026-03-19 04:07:30.065321 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-19 04:07:30.065336 | orchestrator | Thursday 19 March 2026 04:07:20 +0000 (0:00:04.888) 0:00:52.389 ******** 2026-03-19 04:07:30.065351 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 04:07:30.065367 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 04:07:30.065382 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-19 
04:07:30.065397 | orchestrator | 2026-03-19 04:07:30.065413 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-19 04:07:30.065428 | orchestrator | Thursday 19 March 2026 04:07:23 +0000 (0:00:02.798) 0:00:55.187 ******** 2026-03-19 04:07:30.065444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 04:07:30.065459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 04:07:30.065475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-19 04:07:30.065490 | orchestrator | 2026-03-19 04:07:30.065506 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-19 04:07:30.065522 | orchestrator | Thursday 19 March 2026 04:07:28 +0000 (0:00:04.379) 0:00:59.567 ******** 2026-03-19 04:07:30.065537 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:30.065555 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:07:30.065582 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:07:50.914293 | orchestrator | 2026-03-19 04:07:50.914394 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-19 04:07:50.914409 | orchestrator | Thursday 19 March 2026 04:07:30 +0000 (0:00:01.952) 0:01:01.519 ******** 2026-03-19 04:07:50.914422 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 04:07:50.914457 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 04:07:50.914469 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-19 04:07:50.914480 | 
orchestrator | 2026-03-19 04:07:50.914491 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-19 04:07:50.914502 | orchestrator | Thursday 19 March 2026 04:07:33 +0000 (0:00:03.130) 0:01:04.650 ******** 2026-03-19 04:07:50.914513 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 04:07:50.914525 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 04:07:50.914536 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-19 04:07:50.914547 | orchestrator | 2026-03-19 04:07:50.914558 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-19 04:07:50.914569 | orchestrator | Thursday 19 March 2026 04:07:35 +0000 (0:00:02.821) 0:01:07.472 ******** 2026-03-19 04:07:50.914580 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:07:50.914591 | orchestrator | 2026-03-19 04:07:50.914602 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-19 04:07:50.914612 | orchestrator | Thursday 19 March 2026 04:07:37 +0000 (0:00:01.899) 0:01:09.372 ******** 2026-03-19 04:07:50.914624 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-03-19 04:07:50.914635 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-19 04:07:50.914646 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-19 04:07:50.914657 | orchestrator | 2026-03-19 04:07:50.914670 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-19 04:07:50.914683 | orchestrator | Thursday 19 March 2026 04:07:40 +0000 (0:00:02.692) 0:01:12.065 ******** 2026-03-19 04:07:50.914696 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-19 04:07:50.914708 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-19 04:07:50.914721 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-19 04:07:50.914733 | orchestrator | 2026-03-19 04:07:50.914746 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-19 04:07:50.914758 | orchestrator | Thursday 19 March 2026 04:07:43 +0000 (0:00:02.644) 0:01:14.709 ******** 2026-03-19 04:07:50.914779 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:50.914798 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:07:50.914817 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:07:50.914849 | orchestrator | 2026-03-19 04:07:50.914868 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-19 04:07:50.914886 | orchestrator | Thursday 19 March 2026 04:07:44 +0000 (0:00:01.386) 0:01:16.096 ******** 2026-03-19 04:07:50.914905 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:50.914921 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:07:50.914938 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:07:50.914955 | orchestrator | 2026-03-19 04:07:50.914975 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-19 04:07:50.914993 | orchestrator | Thursday 19 March 2026 04:07:46 +0000 (0:00:01.982) 0:01:18.079 ******** 2026-03-19 04:07:50.915031 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915070 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915145 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915168 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915206 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:07:50.915235 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:50.915268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:50.915298 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:07:54.745926 | orchestrator | 2026-03-19 04:07:54.745993 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-19 04:07:54.746000 | orchestrator | Thursday 19 March 2026 04:07:50 +0000 (0:00:04.290) 0:01:22.370 ******** 2026-03-19 04:07:54.746007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 04:07:54.746037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:54.746044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:54.746048 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:54.746054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 04:07:54.746069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:54.746102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:54.746107 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:07:54.746122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 04:07:54.746126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:54.746130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:54.746134 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:07:54.746138 | orchestrator | 2026-03-19 04:07:54.746142 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-03-19 04:07:54.746146 | orchestrator | Thursday 19 March 2026 04:07:52 +0000 (0:00:01.645) 0:01:24.015 ******** 2026-03-19 04:07:54.746150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 04:07:54.746160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:07:54.746164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:07:54.746168 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:07:54.746176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 04:08:06.384738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:08:06.384917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:08:06.384946 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:06.384967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 04:08:06.385038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:08:06.385060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:08:06.385077 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:06.385094 | orchestrator | 2026-03-19 04:08:06.385217 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-19 04:08:06.385240 | orchestrator | Thursday 19 March 2026 04:07:54 +0000 (0:00:02.187) 0:01:26.203 ******** 2026-03-19 04:08:06.385258 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 04:08:06.385277 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 04:08:06.385295 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-19 04:08:06.385313 | orchestrator | 2026-03-19 04:08:06.385330 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-19 04:08:06.385346 | orchestrator | Thursday 19 March 2026 04:07:57 +0000 (0:00:02.456) 0:01:28.659 ******** 2026-03-19 04:08:06.385361 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 04:08:06.385377 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 04:08:06.385396 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-19 04:08:06.385413 | orchestrator | 2026-03-19 04:08:06.385456 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-19 04:08:06.385477 | orchestrator | Thursday 19 March 2026 04:07:59 +0000 (0:00:02.467) 0:01:31.127 ******** 2026-03-19 04:08:06.385495 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 04:08:06.385513 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 04:08:06.385530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 04:08:06.385546 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:06.385562 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-19 04:08:06.385579 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 04:08:06.385596 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:06.385612 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-19 04:08:06.385628 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:06.385642 | orchestrator | 2026-03-19 04:08:06.385653 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-19 04:08:06.385688 | orchestrator | Thursday 19 March 2026 04:08:02 +0000 (0:00:02.509) 0:01:33.636 ******** 2026-03-19 04:08:06.385707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:08:06.385726 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:08:06.385753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:08:06.385770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-03-19 04:08:06.385803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:08:09.970692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:08:09.970828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:08:09.970852 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:08:09.970868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-19 04:08:09.970882 | orchestrator | 2026-03-19 04:08:09.970899 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-19 04:08:09.970916 | orchestrator | Thursday 19 March 2026 04:08:06 +0000 (0:00:04.209) 0:01:37.846 ******** 2026-03-19 04:08:09.970932 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:08:09.970961 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:08:09.970977 | orchestrator | } 2026-03-19 04:08:09.970992 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:08:09.971008 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:08:09.971017 | orchestrator | } 2026-03-19 04:08:09.971025 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:08:09.971034 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:08:09.971043 | orchestrator | } 2026-03-19 
04:08:09.971052 | orchestrator | 2026-03-19 04:08:09.971061 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:08:09.971087 | orchestrator | Thursday 19 March 2026 04:08:07 +0000 (0:00:01.389) 0:01:39.235 ******** 2026-03-19 04:08:09.971107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-19 04:08:09.971160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:08:09.971179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:08:09.971188 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:09.971197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-19 04:08:09.971207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:08:09.971220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:08:09.971229 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:09.971239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-19 04:08:09.971248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-19 04:08:09.971269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-19 04:08:15.491198 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:15.491275 | orchestrator | 2026-03-19 04:08:15.491281 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-19 04:08:15.491287 | orchestrator | Thursday 19 March 2026 04:08:09 +0000 (0:00:02.186) 0:01:41.422 ******** 2026-03-19 04:08:15.491291 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:08:15.491296 | orchestrator | 2026-03-19 04:08:15.491300 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-19 04:08:15.491304 | orchestrator | Thursday 19 March 2026 04:08:11 +0000 (0:00:01.956) 0:01:43.378 ******** 2026-03-19 04:08:15.491310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:15.491322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:15.491327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:15.491333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:15.491362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:15.491367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:15.491371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:15.491377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:15.491381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:15.491390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:15.491397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174375 | orchestrator | 2026-03-19 04:08:17.174404 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-19 04:08:17.174427 | orchestrator | Thursday 19 March 2026 04:08:16 +0000 (0:00:04.634) 0:01:48.013 ******** 2026-03-19 04:08:17.174472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:17.174500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:17.174522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174600 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:17.174646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:17.174670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:17.174701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:17.174742 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:17.174774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:17.174795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-19 04:08:17.174825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:31.534360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-19 04:08:31.534477 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:31.534495 | orchestrator | 2026-03-19 04:08:31.534508 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-19 04:08:31.534521 | orchestrator | Thursday 19 March 2026 04:08:18 +0000 (0:00:01.692) 0:01:49.705 ******** 2026-03-19 04:08:31.534533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534579 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:31.534590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534637 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:31.534649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:31.534672 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:31.534683 | orchestrator | 2026-03-19 04:08:31.534694 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-19 04:08:31.534705 | orchestrator | Thursday 19 March 2026 04:08:20 +0000 (0:00:02.170) 0:01:51.876 ******** 2026-03-19 
04:08:31.534716 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:08:31.534729 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:08:31.534740 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:08:31.534751 | orchestrator | 2026-03-19 04:08:31.534762 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-19 04:08:31.534773 | orchestrator | Thursday 19 March 2026 04:08:22 +0000 (0:00:02.264) 0:01:54.140 ******** 2026-03-19 04:08:31.534784 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:08:31.534795 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:08:31.534806 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:08:31.534816 | orchestrator | 2026-03-19 04:08:31.534828 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-19 04:08:31.534839 | orchestrator | Thursday 19 March 2026 04:08:25 +0000 (0:00:02.884) 0:01:57.025 ******** 2026-03-19 04:08:31.534850 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:08:31.534861 | orchestrator | 2026-03-19 04:08:31.534872 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-19 04:08:31.534883 | orchestrator | Thursday 19 March 2026 04:08:27 +0000 (0:00:01.596) 0:01:58.621 ******** 2026-03-19 04:08:31.534919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:31.534936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:31.534964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:31.534979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:31.534994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:31.535016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:33.156230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156348 | orchestrator | 2026-03-19 04:08:33.156355 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-19 04:08:33.156361 | orchestrator | Thursday 19 March 2026 04:08:31 +0000 (0:00:04.372) 0:02:02.993 ******** 2026-03-19 04:08:33.156368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:33.156374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156406 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:33.156422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:33.156428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:33.156438 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:33.156444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:08:33.156453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-03-19 04:08:48.820594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:08:48.820683 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:48.820694 | orchestrator | 2026-03-19 04:08:48.820701 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-19 04:08:48.820708 | orchestrator | Thursday 19 March 2026 04:08:33 +0000 (0:00:01.624) 0:02:04.618 ******** 2026-03-19 04:08:48.820715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:48.820724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:48.820732 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:48.820738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 
04:08:48.820745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:48.820751 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:48.820757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:48.820763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:08:48.820769 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:48.820775 | orchestrator | 2026-03-19 04:08:48.820781 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-19 04:08:48.820787 | orchestrator | Thursday 19 March 2026 04:08:34 +0000 (0:00:01.623) 0:02:06.241 ******** 2026-03-19 04:08:48.820793 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:08:48.820800 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:08:48.820806 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:08:48.820812 | orchestrator | 2026-03-19 04:08:48.820818 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-19 04:08:48.820843 | orchestrator | Thursday 19 March 2026 04:08:36 +0000 (0:00:02.104) 0:02:08.345 ******** 2026-03-19 04:08:48.820849 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:08:48.820855 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:08:48.820860 | orchestrator | ok: 
[testbed-node-2] 2026-03-19 04:08:48.820866 | orchestrator | 2026-03-19 04:08:48.820872 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-19 04:08:48.820877 | orchestrator | Thursday 19 March 2026 04:08:39 +0000 (0:00:02.785) 0:02:11.131 ******** 2026-03-19 04:08:48.820883 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:48.820889 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:48.820895 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:48.820901 | orchestrator | 2026-03-19 04:08:48.820906 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-19 04:08:48.820912 | orchestrator | Thursday 19 March 2026 04:08:40 +0000 (0:00:01.336) 0:02:12.467 ******** 2026-03-19 04:08:48.820918 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:08:48.820924 | orchestrator | 2026-03-19 04:08:48.820929 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-19 04:08:48.820935 | orchestrator | Thursday 19 March 2026 04:08:42 +0000 (0:00:01.668) 0:02:14.135 ******** 2026-03-19 04:08:48.820956 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 04:08:48.820971 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 04:08:48.820977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-19 04:08:48.820983 | orchestrator | 2026-03-19 04:08:48.820989 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-19 
04:08:48.820996 | orchestrator | Thursday 19 March 2026 04:08:46 +0000 (0:00:03.536) 0:02:17.671 ******** 2026-03-19 04:08:48.821009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 04:08:48.821015 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:48.821021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 04:08:48.821027 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
04:08:48.821042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-19 04:08:59.954662 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:59.954779 | orchestrator | 2026-03-19 04:08:59.954796 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-19 04:08:59.954810 | orchestrator | Thursday 19 March 2026 04:08:48 +0000 (0:00:02.607) 0:02:20.279 ******** 2026-03-19 04:08:59.954823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954852 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:59.954890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954914 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:59.954925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-19 04:08:59.954949 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:59.954960 | orchestrator | 2026-03-19 04:08:59.954971 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-19 04:08:59.954987 | orchestrator | Thursday 19 March 2026 04:08:51 +0000 (0:00:02.644) 0:02:22.924 ******** 2026-03-19 04:08:59.955007 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:59.955026 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:59.955046 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:59.955065 | orchestrator | 2026-03-19 04:08:59.955080 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-19 04:08:59.955091 | orchestrator | Thursday 19 March 2026 04:08:52 +0000 (0:00:01.359) 0:02:24.283 ******** 2026-03-19 04:08:59.955101 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:08:59.955112 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:08:59.955123 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:08:59.955134 | orchestrator | 2026-03-19 04:08:59.955144 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-19 04:08:59.955155 | orchestrator | Thursday 19 March 2026 04:08:54 +0000 (0:00:01.983) 0:02:26.266 ******** 2026-03-19 04:08:59.955166 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:08:59.955207 | orchestrator | 2026-03-19 04:08:59.955219 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-19 04:08:59.955247 | orchestrator | Thursday 19 March 2026 04:08:56 +0000 (0:00:01.589) 0:02:27.856 ******** 2026-03-19 04:08:59.955286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:59.955314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:08:59.955329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:08:59.955343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:08:59.955361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:08:59.955382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:09:01.922890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.922988 | orchestrator | 2026-03-19 04:09:01.923001 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-19 04:09:01.923013 | orchestrator | Thursday 19 March 2026 04:09:01 +0000 
(0:00:04.688) 0:02:32.545 ******** 2026-03-19 04:09:01.923026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:01.923039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.923052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.923069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:01.923089 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:01.923110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:13.231873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232019 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232034 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:13.232066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:13.232107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-03-19 04:09:13.232163 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:13.232175 | orchestrator | 2026-03-19 04:09:13.232216 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-19 04:09:13.232230 | orchestrator | Thursday 19 March 2026 04:09:03 +0000 (0:00:01.944) 0:02:34.489 ******** 2026-03-19 04:09:13.232242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:13.232256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:13.232269 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:13.232281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:13.232301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:13.232313 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:13.232329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-19 04:09:13.232341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:13.232352 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:13.232364 | orchestrator | 2026-03-19 04:09:13.232378 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-19 04:09:13.232392 | orchestrator | Thursday 19 March 2026 04:09:05 +0000 (0:00:02.049) 0:02:36.539 ******** 2026-03-19 04:09:13.232405 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:13.232419 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:13.232432 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:13.232444 | orchestrator | 2026-03-19 04:09:13.232457 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-19 04:09:13.232471 | orchestrator | Thursday 19 March 2026 04:09:07 +0000 (0:00:02.372) 0:02:38.912 ******** 2026-03-19 04:09:13.232484 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:13.232496 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:13.232509 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:13.232521 | orchestrator | 2026-03-19 04:09:13.232533 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-19 04:09:13.232546 | orchestrator | Thursday 19 March 2026 04:09:10 +0000 (0:00:02.867) 0:02:41.779 ******** 2026-03-19 04:09:13.232559 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:13.232572 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:13.232584 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:13.232595 | orchestrator | 2026-03-19 04:09:13.232606 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-19 04:09:13.232618 | orchestrator | Thursday 19 March 2026 04:09:11 +0000 (0:00:01.549) 0:02:43.329 ******** 2026-03-19 04:09:13.232629 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:13.232640 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:13.232658 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:18.547079 | orchestrator | 2026-03-19 04:09:18.547172 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-19 04:09:18.547183 | orchestrator | Thursday 19 March 2026 04:09:13 +0000 (0:00:01.362) 0:02:44.692 ******** 2026-03-19 04:09:18.547190 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:09:18.547227 | orchestrator | 2026-03-19 04:09:18.547234 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-19 04:09:18.547240 | orchestrator | Thursday 19 March 2026 04:09:14 +0000 (0:00:01.720) 0:02:46.412 ******** 2026-03-19 04:09:18.547251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:09:18.547282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:18.547302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:09:18.547367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:18.547374 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:18.547387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:09:20.346115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:20.346278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:20.346404 | orchestrator | 2026-03-19 04:09:20.346420 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-19 04:09:20.346436 | orchestrator | Thursday 19 March 2026 04:09:19 +0000 (0:00:04.761) 0:02:51.173 ******** 2026-03-19 04:09:20.346451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:20.346474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:21.647905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.648012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.648028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.648041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.648053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.648065 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:21.648099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:21.648997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:21.649045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.649057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.649069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.649081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.649092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:21.649123 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:21.649152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:09:36.063465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-19 04:09:36.063615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-19 04:09:36.063637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-19 04:09:36.063650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-19 04:09:36.063702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:09:36.063715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-19 04:09:36.063728 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:36.063741 | orchestrator | 2026-03-19 04:09:36.063753 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-19 04:09:36.063766 | orchestrator | Thursday 19 March 2026 04:09:21 +0000 (0:00:01.937) 0:02:53.111 ******** 2026-03-19 04:09:36.063795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063825 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:36.063836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063859 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:36.063870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:09:36.063892 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:36.063904 | orchestrator | 2026-03-19 04:09:36.063915 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-19 04:09:36.063926 | orchestrator | Thursday 19 March 2026 04:09:23 +0000 (0:00:01.929) 0:02:55.041 ******** 2026-03-19 04:09:36.063937 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:36.063949 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:36.063960 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:36.063977 | orchestrator | 2026-03-19 04:09:36.064004 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-19 04:09:36.064028 | orchestrator | Thursday 19 March 2026 04:09:25 +0000 (0:00:02.219) 0:02:57.260 ******** 2026-03-19 04:09:36.064061 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:36.064079 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:36.064098 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:36.064116 | orchestrator | 2026-03-19 04:09:36.064133 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-19 04:09:36.064151 | orchestrator | Thursday 19 March 2026 04:09:28 +0000 (0:00:02.800) 0:03:00.060 ******** 2026-03-19 04:09:36.064167 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:36.064183 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:36.064201 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:36.064249 | orchestrator | 2026-03-19 04:09:36.064268 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-03-19 04:09:36.064285 | orchestrator | Thursday 19 March 2026 04:09:29 +0000 (0:00:01.373) 0:03:01.434 ******** 2026-03-19 04:09:36.064302 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:09:36.064319 | orchestrator | 2026-03-19 04:09:36.064336 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-19 04:09:36.064352 | orchestrator | Thursday 19 March 2026 04:09:31 +0000 (0:00:01.806) 0:03:03.240 ******** 2026-03-19 04:09:36.064403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 04:09:37.136674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 04:09:37.136865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-19 04:09:37.136927 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:37.136970 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:37.136994 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:39.947500 | 
orchestrator | 2026-03-19 04:09:39.947606 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-19 04:09:39.947621 | orchestrator | Thursday 19 March 2026 04:09:37 +0000 (0:00:05.367) 0:03:08.608 ******** 2026-03-19 04:09:39.947656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 04:09:39.947675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:39.947738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 04:09:39.947754 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:39.947767 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:39.947786 | orchestrator | skipping: [testbed-node-1] 
2026-03-19 04:09:39.947808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-19 04:09:57.282625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-19 04:09:57.282752 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:57.282769 | orchestrator | 2026-03-19 04:09:57.282782 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-03-19 04:09:57.282820 | orchestrator | Thursday 19 March 2026 04:09:41 +0000 (0:00:03.888) 0:03:12.496 ******** 2026-03-19 04:09:57.282842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.282862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.282881 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:57.282901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.282959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.282974 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:57.282986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.282997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-19 04:09:57.283009 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:57.283019 | orchestrator | 2026-03-19 04:09:57.283031 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-19 04:09:57.283042 | orchestrator 
| Thursday 19 March 2026 04:09:44 +0000 (0:00:03.940) 0:03:16.436 ******** 2026-03-19 04:09:57.283052 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:57.283064 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:57.283084 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:57.283094 | orchestrator | 2026-03-19 04:09:57.283105 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-19 04:09:57.283116 | orchestrator | Thursday 19 March 2026 04:09:47 +0000 (0:00:02.109) 0:03:18.546 ******** 2026-03-19 04:09:57.283127 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:09:57.283137 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:09:57.283148 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:09:57.283159 | orchestrator | 2026-03-19 04:09:57.283169 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-19 04:09:57.283180 | orchestrator | Thursday 19 March 2026 04:09:49 +0000 (0:00:02.769) 0:03:21.316 ******** 2026-03-19 04:09:57.283191 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:09:57.283202 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:09:57.283212 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:09:57.283223 | orchestrator | 2026-03-19 04:09:57.283233 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-19 04:09:57.283312 | orchestrator | Thursday 19 March 2026 04:09:51 +0000 (0:00:01.353) 0:03:22.669 ******** 2026-03-19 04:09:57.283325 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:09:57.283343 | orchestrator | 2026-03-19 04:09:57.283360 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-19 04:09:57.283376 | orchestrator | Thursday 19 March 2026 04:09:52 +0000 (0:00:01.582) 0:03:24.252 ******** 2026-03-19 04:09:57.283395 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:09:57.283427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:10:13.977567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:10:13.977691 | orchestrator | 2026-03-19 04:10:13.977707 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-19 04:10:13.977734 | orchestrator | Thursday 19 March 2026 04:09:57 +0000 (0:00:04.490) 0:03:28.743 ******** 2026-03-19 04:10:13.977743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:10:13.977750 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:13.977758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:10:13.977765 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:13.977771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:10:13.977777 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:13.977784 | orchestrator | 2026-03-19 04:10:13.977790 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-19 04:10:13.977796 | orchestrator | Thursday 19 March 2026 04:09:59 +0000 (0:00:01.804) 0:03:30.547 ******** 2026-03-19 04:10:13.977804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977821 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:13.977851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977870 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:13.977876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:10:13.977889 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:13.977895 | orchestrator | 2026-03-19 04:10:13.977901 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-19 04:10:13.977908 | orchestrator | Thursday 19 March 2026 04:10:00 +0000 (0:00:01.450) 0:03:31.998 ******** 2026-03-19 04:10:13.977914 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:13.977921 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:13.977927 | orchestrator | ok: [testbed-node-2] 2026-03-19 
04:10:13.977933 | orchestrator | 2026-03-19 04:10:13.977939 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-19 04:10:13.977945 | orchestrator | Thursday 19 March 2026 04:10:02 +0000 (0:00:02.317) 0:03:34.316 ******** 2026-03-19 04:10:13.977951 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:13.977959 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:10:13.977969 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:13.977978 | orchestrator | 2026-03-19 04:10:13.977988 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-19 04:10:13.978011 | orchestrator | Thursday 19 March 2026 04:10:05 +0000 (0:00:02.886) 0:03:37.203 ******** 2026-03-19 04:10:13.978067 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:13.978073 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:13.978079 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:13.978085 | orchestrator | 2026-03-19 04:10:13.978092 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-19 04:10:13.978099 | orchestrator | Thursday 19 March 2026 04:10:07 +0000 (0:00:01.398) 0:03:38.602 ******** 2026-03-19 04:10:13.978106 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:10:13.978113 | orchestrator | 2026-03-19 04:10:13.978120 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-19 04:10:13.978127 | orchestrator | Thursday 19 March 2026 04:10:08 +0000 (0:00:01.662) 0:03:40.264 ******** 2026-03-19 04:10:13.978152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 04:10:15.731459 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 04:10:15.731609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-19 04:10:15.731676 | orchestrator | 2026-03-19 04:10:15.731691 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-19 04:10:15.731703 | orchestrator | Thursday 19 March 2026 04:10:13 +0000 (0:00:05.176) 0:03:45.441 ******** 2026-03-19 04:10:15.731716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 04:10:15.731730 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:15.731828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 04:10:24.590251 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:24.590419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-19 04:10:24.590480 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:24.590496 | orchestrator | 2026-03-19 04:10:24.590510 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-03-19 04:10:24.590523 | orchestrator | Thursday 19 March 2026 04:10:15 +0000 (0:00:01.752) 0:03:47.193 ******** 2026-03-19 04:10:24.590537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 04:10:24.590628 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:24.590662 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 04:10:24.590705 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:24.590722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-19 04:10:24.590752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-19 04:10:24.590760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-19 04:10:24.590768 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:24.590776 | orchestrator | 2026-03-19 04:10:24.590784 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-19 04:10:24.590793 | orchestrator | Thursday 19 March 2026 04:10:17 +0000 (0:00:01.983) 0:03:49.177 ******** 2026-03-19 04:10:24.590801 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:24.590809 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:24.590817 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:10:24.590826 | 
orchestrator | 2026-03-19 04:10:24.590834 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-19 04:10:24.590842 | orchestrator | Thursday 19 March 2026 04:10:20 +0000 (0:00:02.311) 0:03:51.489 ******** 2026-03-19 04:10:24.590850 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:24.590858 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:24.590866 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:10:24.590874 | orchestrator | 2026-03-19 04:10:24.590881 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-19 04:10:24.590889 | orchestrator | Thursday 19 March 2026 04:10:22 +0000 (0:00:02.980) 0:03:54.469 ******** 2026-03-19 04:10:24.590897 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:24.590905 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:24.590913 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:24.590921 | orchestrator | 2026-03-19 04:10:24.590929 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-19 04:10:24.590937 | orchestrator | Thursday 19 March 2026 04:10:24 +0000 (0:00:01.349) 0:03:55.819 ******** 2026-03-19 04:10:24.590949 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:34.641079 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:34.641184 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:34.641199 | orchestrator | 2026-03-19 04:10:34.641210 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-19 04:10:34.641221 | orchestrator | Thursday 19 March 2026 04:10:25 +0000 (0:00:01.365) 0:03:57.184 ******** 2026-03-19 04:10:34.641230 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:10:34.641240 | orchestrator | 2026-03-19 04:10:34.641249 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-03-19 04:10:34.641259 | orchestrator | Thursday 19 March 2026 04:10:27 +0000 (0:00:02.104) 0:03:59.289 ******** 2026-03-19 04:10:34.641365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-19 04:10:34.641381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:34.641406 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-19 04:10:34.641416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:34.641443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:34.641459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:34.641469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-19 04:10:34.641479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:34.641492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:34.641501 | orchestrator | 2026-03-19 04:10:34.641510 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-19 04:10:34.641521 | orchestrator | Thursday 19 March 2026 04:10:32 +0000 (0:00:04.853) 0:04:04.143 ******** 2026-03-19 04:10:34.641536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-19 04:10:36.311925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:36.312044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:36.312063 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:36.312097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-19 04:10:36.312112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:36.312125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:36.312161 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:36.312208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-19 04:10:36.312231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-19 04:10:36.312250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-19 04:10:36.312266 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:36.312314 | orchestrator | 2026-03-19 04:10:36.312335 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-19 04:10:36.312357 | orchestrator | Thursday 19 March 2026 04:10:34 +0000 (0:00:01.956) 0:04:06.099 ******** 2026-03-19 04:10:36.312388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312420 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:36.312430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312460 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:36.312469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-19 04:10:36.312490 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:36.312499 | orchestrator | 2026-03-19 04:10:36.312509 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-19 04:10:36.312527 | orchestrator | Thursday 19 March 2026 04:10:36 +0000 (0:00:01.669) 0:04:07.769 ******** 2026-03-19 04:10:51.546977 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:51.547129 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:51.547158 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:10:51.547177 | orchestrator | 2026-03-19 04:10:51.547198 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-19 04:10:51.547217 | orchestrator | Thursday 19 March 2026 04:10:38 +0000 (0:00:02.218) 0:04:09.988 ******** 2026-03-19 04:10:51.547236 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:10:51.547253 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:10:51.547270 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:10:51.547289 | orchestrator | 2026-03-19 04:10:51.547378 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-19 04:10:51.547398 | orchestrator | Thursday 19 March 2026 04:10:41 +0000 (0:00:03.131) 0:04:13.119 ******** 2026-03-19 04:10:51.547416 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:10:51.547435 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:10:51.547451 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:10:51.547468 | orchestrator | 2026-03-19 04:10:51.547486 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-19 04:10:51.547504 | orchestrator | Thursday 19 March 2026 04:10:42 +0000 (0:00:01.345) 0:04:14.464 ******** 2026-03-19 04:10:51.547522 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:10:51.547542 | orchestrator | 2026-03-19 04:10:51.547563 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-19 04:10:51.547584 | orchestrator | 
Thursday 19 March 2026 04:10:44 +0000 (0:00:01.795) 0:04:16.260 ******** 2026-03-19 04:10:51.547613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:10:51.547701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:10:51.547726 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:10:51.547777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:10:51.547799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:10:51.547828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:10:51.547863 | orchestrator | 2026-03-19 04:10:51.547884 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-19 04:10:51.547906 | orchestrator | Thursday 19 March 2026 04:10:49 +0000 (0:00:05.043) 0:04:21.304 ******** 2026-03-19 
04:10:51.547927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:10:51.547963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:11:04.516823 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:04.516930 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:11:04.516948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:11:04.516981 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:04.517005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:11:04.517014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:11:04.517023 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:04.517031 | orchestrator | 2026-03-19 04:11:04.517040 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-19 
04:11:04.517049 | orchestrator | Thursday 19 March 2026 04:10:51 +0000 (0:00:01.702) 0:04:23.007 ******** 2026-03-19 04:11:04.517070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517092 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:04.517100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517117 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:04.517125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:04.517148 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:04.517156 | 
orchestrator | 2026-03-19 04:11:04.517164 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-19 04:11:04.517172 | orchestrator | Thursday 19 March 2026 04:10:53 +0000 (0:00:01.902) 0:04:24.910 ******** 2026-03-19 04:11:04.517180 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:11:04.517189 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:11:04.517196 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:11:04.517204 | orchestrator | 2026-03-19 04:11:04.517212 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-19 04:11:04.517220 | orchestrator | Thursday 19 March 2026 04:10:55 +0000 (0:00:02.279) 0:04:27.189 ******** 2026-03-19 04:11:04.517228 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:11:04.517236 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:11:04.517244 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:11:04.517252 | orchestrator | 2026-03-19 04:11:04.517264 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-19 04:11:04.517272 | orchestrator | Thursday 19 March 2026 04:10:58 +0000 (0:00:02.979) 0:04:30.169 ******** 2026-03-19 04:11:04.517280 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:11:04.517288 | orchestrator | 2026-03-19 04:11:04.517296 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-19 04:11:04.517304 | orchestrator | Thursday 19 March 2026 04:11:00 +0000 (0:00:02.064) 0:04:32.234 ******** 2026-03-19 04:11:04.517341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:04.517352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:04.517369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': 
'30'}}})  2026-03-19 04:11:06.199375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:06.199479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:06.199556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:06.199578 | orchestrator | 2026-03-19 04:11:06.199585 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-19 04:11:06.199592 | orchestrator | Thursday 19 March 2026 04:11:05 +0000 (0:00:04.831) 0:04:37.066 ******** 2026-03-19 04:11:06.199600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:11:06.199610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164382 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:09.164438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:11:09.164452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164534 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:09.164546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': 
'30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:11:09.164578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-19 04:11:09.164622 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:09.164634 | orchestrator | 2026-03-19 04:11:09.164646 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-19 04:11:09.164658 | orchestrator | Thursday 19 March 2026 04:11:07 +0000 (0:00:01.686) 0:04:38.752 ******** 2026-03-19 04:11:09.164672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:09.164686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:09.164699 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:09.164711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:09.164730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-03-19 04:11:24.873290 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:24.873492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:24.873540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:11:24.873560 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:24.873577 | orchestrator | 2026-03-19 04:11:24.873595 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-19 04:11:24.873616 | orchestrator | Thursday 19 March 2026 04:11:09 +0000 (0:00:01.873) 0:04:40.625 ******** 2026-03-19 04:11:24.873635 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:11:24.873655 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:11:24.873673 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:11:24.873692 | orchestrator | 2026-03-19 04:11:24.873711 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-19 04:11:24.873730 | orchestrator | Thursday 19 March 2026 04:11:11 +0000 (0:00:02.271) 0:04:42.897 ******** 2026-03-19 04:11:24.873749 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:11:24.873765 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:11:24.873776 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:11:24.873787 | orchestrator | 2026-03-19 04:11:24.873816 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-19 04:11:24.873829 | orchestrator | Thursday 19 March 2026 04:11:14 +0000 (0:00:02.932) 0:04:45.830 ******** 2026-03-19 04:11:24.873842 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:11:24.873854 | orchestrator | 2026-03-19 04:11:24.873866 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-19 04:11:24.873880 | orchestrator | Thursday 19 March 2026 04:11:17 +0000 (0:00:02.816) 0:04:48.647 ******** 2026-03-19 04:11:24.873892 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:11:24.873905 | orchestrator | 2026-03-19 04:11:24.873917 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-19 04:11:24.873930 | orchestrator | Thursday 19 March 2026 04:11:21 +0000 (0:00:04.157) 0:04:52.804 ******** 2026-03-19 04:11:24.873948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:11:24.874072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:24.874090 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:24.874138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:11:24.874162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:24.874174 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:24.874197 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:11:28.463565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:28.463689 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:28.463707 | orchestrator | 2026-03-19 04:11:28.463719 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-19 04:11:28.463748 | orchestrator | Thursday 19 March 2026 04:11:24 +0000 (0:00:03.525) 0:04:56.329 ******** 2026-03-19 04:11:28.463765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:11:28.463805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:28.463845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:11:28.463868 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:28.463880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:28.463892 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:28.463905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-19 04:11:28.463925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-19 04:11:44.799102 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:44.799207 | orchestrator | 2026-03-19 04:11:44.799224 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-19 04:11:44.799236 | orchestrator | Thursday 19 March 2026 04:11:28 +0000 (0:00:03.589) 0:04:59.919 ******** 2026-03-19 04:11:44.799264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799314 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:44.799324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799345 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:44.799435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-19 04:11:44.799456 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:44.799466 | orchestrator | 2026-03-19 04:11:44.799476 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-19 04:11:44.799486 | orchestrator | Thursday 19 March 2026 04:11:32 +0000 (0:00:04.041) 0:05:03.960 ******** 2026-03-19 04:11:44.799496 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:11:44.799522 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:11:44.799533 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:11:44.799543 | orchestrator | 2026-03-19 04:11:44.799552 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-19 04:11:44.799571 | orchestrator | Thursday 19 March 2026 04:11:35 +0000 (0:00:03.150) 0:05:07.112 ******** 2026-03-19 04:11:44.799581 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:44.799591 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:44.799600 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:11:44.799609 | orchestrator | 2026-03-19 04:11:44.799619 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-19 04:11:44.799629 | orchestrator | Thursday 19 March 2026 04:11:38 +0000 (0:00:02.646) 0:05:09.758 ******** 2026-03-19 04:11:44.799638 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:44.799650 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:44.799661 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:44.799672 | orchestrator | 2026-03-19 04:11:44.799689 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-19 04:11:44.799701 | orchestrator | Thursday 19 March 2026 04:11:39 +0000 (0:00:01.356) 0:05:11.115 ******** 2026-03-19 04:11:44.799713 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:11:44.799724 | orchestrator | 2026-03-19 04:11:44.799735 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-19 04:11:44.799746 | orchestrator | Thursday 19 March 2026 04:11:41 +0000 (0:00:02.150) 0:05:13.266 ******** 2026-03-19 04:11:44.799759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}}) 2026-03-19 04:11:44.799772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 04:11:44.799784 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 04:11:44.799796 | orchestrator | 2026-03-19 04:11:44.799808 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-19 04:11:44.799820 | orchestrator | Thursday 19 March 2026 04:11:44 +0000 (0:00:02.511) 0:05:15.777 ******** 2026-03-19 04:11:44.799838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:11:59.167810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:11:59.167904 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:59.167915 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:59.167922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:11:59.167929 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:59.167936 | orchestrator | 2026-03-19 04:11:59.167943 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-19 04:11:59.167950 | orchestrator | Thursday 19 March 2026 04:11:46 +0000 (0:00:01.938) 0:05:17.716 ******** 2026-03-19 04:11:59.167958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 04:11:59.167966 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:59.167973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 04:11:59.167979 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:59.167986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-19 04:11:59.167993 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:11:59.168004 | orchestrator | 2026-03-19 04:11:59.168015 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-19 04:11:59.168026 | orchestrator | Thursday 19 March 2026 04:11:47 +0000 (0:00:01.417) 0:05:19.133 ******** 2026-03-19 04:11:59.168059 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:59.168066 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:59.168072 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:59.168078 | orchestrator | 2026-03-19 04:11:59.168084 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-19 04:11:59.168091 | orchestrator | Thursday 19 March 2026 04:11:49 +0000 (0:00:01.435) 0:05:20.568 ******** 2026-03-19 04:11:59.168097 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:59.168103 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:59.168109 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:59.168115 | orchestrator | 2026-03-19 04:11:59.168121 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-19 04:11:59.168127 | orchestrator | Thursday 19 March 2026 04:11:51 +0000 (0:00:02.437) 0:05:23.006 ******** 2026-03-19 04:11:59.168134 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:11:59.168140 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:11:59.168146 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:11:59.168152 | orchestrator | 2026-03-19 04:11:59.168158 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-19 04:11:59.168164 | orchestrator | Thursday 19 March 2026 04:11:52 +0000 (0:00:01.357) 0:05:24.364 ******** 2026-03-19 04:11:59.168170 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:11:59.168177 | orchestrator | 2026-03-19 04:11:59.168183 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-19 04:11:59.168189 | orchestrator | Thursday 19 March 2026 04:11:54 +0000 (0:00:01.934) 0:05:26.298 ******** 2026-03-19 04:11:59.168220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:59.168235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.168247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-19 04:11:59.168267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:11:59.168287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.401057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.401195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-03-19 04:11:59.401225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:11:59.401277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:11:59.401298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.401317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:11:59.401444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.401469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.401489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:11:59.401526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 
04:11:59.401547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:59.401591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:11:59.499903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.500050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 
'false'}}})  2026-03-19 04:11:59.500075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.500107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:11:59.500141 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.500152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-19 04:11:59.500170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.500179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:11:59.500188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 
04:11:59.500201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.500217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.582470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:11:59.582592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.582605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:11:59.582614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:11:59.582642 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:11:59.582667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.582683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.582690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:11:59.582698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:11:59.582705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  
2026-03-19 04:11:59.582712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:11:59.582724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:11:59.582739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.636547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:12:01.636659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:12:01.636678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:01.636711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:01.636725 | orchestrator | 2026-03-19 04:12:01.636739 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-19 04:12:01.636773 | orchestrator | Thursday 19 March 2026 04:12:00 +0000 (0:00:05.843) 0:05:32.142 ******** 2026-03-19 04:12:01.636807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:12:01.636822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.636835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-19 04:12:01.636854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:12:01.636875 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.636895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:01.706604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:12:01.706732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:01.706756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.706795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:12:01.706841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-19 04:12:01.706888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:01.706908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:12:01.706927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.706946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.706975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:12:01.707021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:01.784346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:01.784511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:01.784539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.784616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:12:01.784676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:12:01.784730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:12:01.784755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:01.784774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:01.784792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.784822 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:12:01.784836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:01.784858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-19 04:12:02.951837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:12:02.951963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-19 04:12:02.951986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:02.952056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:02.952078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:02.952122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:12:02.952144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:02.952165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:02.952197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:02.952219 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:12:02.952247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-19 04:12:02.952270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:02.952305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:16.873158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-19 04:12:16.873291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-19 04:12:16.873364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-19 04:12:16.873455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-19 04:12:16.873485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-19 04:12:16.873506 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:12:16.873527 | orchestrator | 2026-03-19 04:12:16.873547 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-19 04:12:16.873567 | orchestrator | Thursday 19 March 2026 04:12:02 +0000 (0:00:02.273) 0:05:34.415 ******** 2026-03-19 04:12:16.873588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873659 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:12:16.873674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873701 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:12:16.873714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:12:16.873756 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:12:16.873768 | orchestrator | 2026-03-19 04:12:16.873782 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-19 04:12:16.873794 | orchestrator | Thursday 19 March 2026 04:12:05 +0000 
(0:00:02.540) 0:05:36.956 ******** 2026-03-19 04:12:16.873806 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:12:16.873819 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:12:16.873831 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:12:16.873843 | orchestrator | 2026-03-19 04:12:16.873855 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-19 04:12:16.873869 | orchestrator | Thursday 19 March 2026 04:12:07 +0000 (0:00:02.227) 0:05:39.184 ******** 2026-03-19 04:12:16.873881 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:12:16.873896 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:12:16.873915 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:12:16.873934 | orchestrator | 2026-03-19 04:12:16.873953 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-19 04:12:16.873971 | orchestrator | Thursday 19 March 2026 04:12:10 +0000 (0:00:02.680) 0:05:41.864 ******** 2026-03-19 04:12:16.873989 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:12:16.874008 | orchestrator | 2026-03-19 04:12:16.874112 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-19 04:12:16.874125 | orchestrator | Thursday 19 March 2026 04:12:12 +0000 (0:00:02.077) 0:05:43.942 ******** 2026-03-19 04:12:16.874139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:12:16.874165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:12:33.382204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:12:33.382327 | orchestrator | 2026-03-19 04:12:33.382344 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-19 04:12:33.382357 | orchestrator | Thursday 19 March 2026 04:12:16 +0000 (0:00:04.393) 0:05:48.335 ******** 2026-03-19 04:12:33.382387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:12:33.382459 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:12:33.382475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:12:33.382488 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:12:33.382519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:12:33.382555 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:12:33.382567 | orchestrator | 2026-03-19 04:12:33.382578 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-19 04:12:33.382589 | orchestrator | Thursday 19 March 2026 04:12:18 +0000 (0:00:01.769) 0:05:50.104 ******** 2026-03-19 04:12:33.382603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382630 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:12:33.382641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382683 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:12:33.382697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:12:33.382710 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:12:33.382723 | orchestrator | 2026-03-19 04:12:33.382735 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-19 04:12:33.382747 | orchestrator | Thursday 19 March 2026 04:12:20 +0000 (0:00:01.919) 0:05:52.024 ******** 2026-03-19 04:12:33.382759 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:12:33.382772 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:12:33.382783 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:12:33.382796 | orchestrator | 2026-03-19 04:12:33.382810 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-19 04:12:33.382822 | orchestrator | Thursday 19 March 2026 04:12:22 +0000 (0:00:02.298) 0:05:54.322 ******** 2026-03-19 04:12:33.382835 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:12:33.382847 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:12:33.382858 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:12:33.382869 | orchestrator | 2026-03-19 04:12:33.382880 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-19 
04:12:33.382899 | orchestrator | Thursday 19 March 2026 04:12:25 +0000 (0:00:02.738) 0:05:57.061 ******** 2026-03-19 04:12:33.382910 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:12:33.382922 | orchestrator | 2026-03-19 04:12:33.382941 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-19 04:12:33.382959 | orchestrator | Thursday 19 March 2026 04:12:27 +0000 (0:00:02.081) 0:05:59.143 ******** 2026-03-19 04:12:33.382990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507677 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:12:34.507731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:12:34.507741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-19 04:12:34.507764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-19 04:12:34.507778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:12:34.507793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207528 | orchestrator |
2026-03-19 04:12:35.207561 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-19 04:12:35.207584 | orchestrator | Thursday 19 March 2026 04:12:34 +0000 (0:00:06.832) 0:06:05.975 ********
2026-03-19 04:12:35.207630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:35.207671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:35.207685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207730 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:12:35.207743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:35.207761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:35.207781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 04:12:35.207804 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:12:35.207826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:54.019983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:12:54.020150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-19 04:12:54.020176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-19 04:12:54.020194 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:12:54.020215 | orchestrator |
2026-03-19 04:12:54.020232 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-19 04:12:54.020250 | orchestrator | Thursday 19 March 2026 04:12:36 +0000 (0:00:01.860) 0:06:07.836 ********
2026-03-19 04:12:54.020267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020340 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:12:54.020357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020513 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:12:54.020539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-19 04:12:54.020610 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:12:54.020627 | orchestrator |
2026-03-19 04:12:54.020644 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-19 04:12:54.020660 | orchestrator | Thursday 19 March 2026 04:12:38 +0000 (0:00:02.478) 0:06:10.315 ********
2026-03-19 04:12:54.020676 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:12:54.020695 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:12:54.020711 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:12:54.020728 | orchestrator |
2026-03-19 04:12:54.020743 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-19 04:12:54.020760 | orchestrator | Thursday 19 March 2026 04:12:41 +0000 (0:00:02.253) 0:06:12.568 ********
2026-03-19 04:12:54.020776 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:12:54.020792 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:12:54.020808 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:12:54.020823 | orchestrator |
2026-03-19 04:12:54.020840 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-19 04:12:54.020856 | orchestrator | Thursday 19 March 2026 04:12:44 +0000 (0:00:02.913) 0:06:15.482 ********
2026-03-19 04:12:54.020873 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:12:54.020889 | orchestrator |
2026-03-19 04:12:54.020905 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-19 04:12:54.020921 | orchestrator | Thursday 19 March 2026 04:12:46 +0000 (0:00:02.787) 0:06:18.270 ********
2026-03-19 04:12:54.020937 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-19 04:12:54.020955 | orchestrator |
2026-03-19 04:12:54.020971 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-19 04:12:54.020988 | orchestrator | Thursday 19 March 2026 04:12:48 +0000 (0:00:01.695) 0:06:19.965 ********
2026-03-19 04:12:54.021006 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:12:54.021026 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:12:54.021066 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.433150 | orchestrator |
2026-03-19 04:13:13.433298 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-19 04:13:13.433328 | orchestrator | Thursday 19 March 2026 04:12:53 +0000 (0:00:05.506) 0:06:25.471 ********
2026-03-19 04:13:13.433373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.433400 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:13.433423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.433535 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:13.433558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.433579 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:13.433598 | orchestrator |
2026-03-19 04:13:13.433619 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-19 04:13:13.433638 | orchestrator | Thursday 19 March 2026 04:12:56 +0000 (0:00:02.443) 0:06:27.914 ********
2026-03-19 04:13:13.433659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433706 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:13.433727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433797 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:13.433818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-19 04:13:13.433858 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:13.433877 | orchestrator |
2026-03-19 04:13:13.433896 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-19 04:13:13.433915 | orchestrator | Thursday 19 March 2026 04:12:58 +0000 (0:00:02.494) 0:06:30.408 ********
2026-03-19 04:13:13.433934 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:13.433954 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:13.433972 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:13.433992 | orchestrator |
2026-03-19 04:13:13.434011 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-19 04:13:13.434112 | orchestrator | Thursday 19 March 2026 04:13:02 +0000 (0:00:03.818) 0:06:34.227 ********
2026-03-19 04:13:13.434132 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:13.434150 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:13.434190 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:13.434227 | orchestrator |
2026-03-19 04:13:13.434246 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-19 04:13:13.434264 | orchestrator | Thursday 19 March 2026 04:13:06 +0000 (0:00:04.049) 0:06:38.277 ********
2026-03-19 04:13:13.434284 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-19 04:13:13.434303 | orchestrator |
2026-03-19 04:13:13.434331 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-19 04:13:13.434350 | orchestrator | Thursday 19 March 2026 04:13:08 +0000 (0:00:01.696) 0:06:39.974 ********
2026-03-19 04:13:13.434371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.434391 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:13.434410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.434430 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:13.434479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.434521 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:13.434539 | orchestrator |
2026-03-19 04:13:13.434558 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-19 04:13:13.434575 | orchestrator | Thursday 19 March 2026 04:13:10 +0000 (0:00:02.436) 0:06:42.411 ********
2026-03-19 04:13:13.434594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.434614 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:13.434633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:13.434653 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:13.434684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-19 04:13:47.155023 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:47.155149 | orchestrator |
2026-03-19 04:13:47.155164 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-19 04:13:47.155176 | orchestrator | Thursday 19 March 2026 04:13:13 +0000 (0:00:02.476) 0:06:44.887 ********
2026-03-19 04:13:47.155188 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:47.155197 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:47.155207 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:47.155217 | orchestrator |
2026-03-19 04:13:47.155243 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-19 04:13:47.155253 | orchestrator | Thursday 19 March 2026 04:13:15 +0000 (0:00:02.494) 0:06:47.382 ********
2026-03-19 04:13:47.155263 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:47.155273 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:47.155283 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:47.155293 | orchestrator |
2026-03-19 04:13:47.155302 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-19 04:13:47.155312 | orchestrator | Thursday 19 March 2026 04:13:19 +0000 (0:00:03.566) 0:06:50.948 ********
2026-03-19 04:13:47.155322 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:47.155331 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:47.155341 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:47.155350 | orchestrator |
2026-03-19 04:13:47.155360 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-19 04:13:47.155369 | orchestrator | Thursday 19 March 2026 04:13:23 +0000 (0:00:04.168) 0:06:55.117 ********
2026-03-19 04:13:47.155379 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-19 04:13:47.155413 | orchestrator |
2026-03-19 04:13:47.155423 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-19 04:13:47.155433 | orchestrator | Thursday 19 March 2026 04:13:25 +0000 (0:00:02.352) 0:06:57.469 ********
2026-03-19 04:13:47.155445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155459 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:47.155469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155510 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:47.155521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155533 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:47.155544 | orchestrator |
2026-03-19 04:13:47.155556 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-19 04:13:47.155568 | orchestrator | Thursday 19 March 2026 04:13:28 +0000 (0:00:02.392) 0:06:59.862 ********
2026-03-19 04:13:47.155579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155591 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:47.155626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155639 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:47.155656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-19 04:13:47.155677 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:47.155688 | orchestrator |
2026-03-19 04:13:47.155699 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-19 04:13:47.155710 | orchestrator | Thursday 19 March 2026 04:13:30 +0000 (0:00:02.551) 0:07:02.414 ********
2026-03-19 04:13:47.155721 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:13:47.155732 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:13:47.155761 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:13:47.155772 | orchestrator |
2026-03-19 04:13:47.155793 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-19 04:13:47.155804 | orchestrator | Thursday 19 March 2026 04:13:33 +0000 (0:00:02.410) 0:07:04.825 ********
2026-03-19 04:13:47.155815 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:47.155826 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:47.155837 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:47.155849 | orchestrator |
2026-03-19 04:13:47.155860 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-19 04:13:47.155871 | orchestrator | Thursday 19 March 2026 04:13:36 +0000 (0:00:03.485) 0:07:08.311 ********
2026-03-19 04:13:47.155883 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:13:47.155892 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:13:47.155901 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:13:47.155911 | orchestrator |
2026-03-19 04:13:47.155920 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-19 04:13:47.155930 | orchestrator | Thursday 19 March 2026 04:13:41 +0000 (0:00:04.199) 0:07:12.510 ********
2026-03-19 04:13:47.155939 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:13:47.155949 | orchestrator |
2026-03-19 04:13:47.155958 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-19 04:13:47.155968 | orchestrator | Thursday 19 March 2026 04:13:43 +0000 (0:00:02.379) 0:07:14.889 ********
2026-03-19 04:13:47.155979 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-19 04:13:47.155993 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-19 04:13:47.156023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-19 04:13:49.061948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes':
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 04:13:49.062188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:13:49.062227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:13:49.062301 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-19 04:13:49.062314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 04:13:49.062327 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:13:49.062357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  
2026-03-19 04:13:49.062370 | orchestrator | 2026-03-19 04:13:49.062383 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-19 04:13:49.062440 | orchestrator | Thursday 19 March 2026 04:13:48 +0000 (0:00:04.899) 0:07:19.789 ******** 2026-03-19 04:13:49.062473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 04:13:50.137098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 04:13:50.137186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 04:13:50.137199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:13:50.137209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:13:50.137239 | orchestrator | skipping: [testbed-node-0] 
2026-03-19 04:13:50.137262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 04:13:50.137273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 04:13:50.137296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-19 04:13:50.137304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:13:50.137312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:13:50.137326 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:13:50.137334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-19 04:13:50.137347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-19 04:13:50.137359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-19 04:14:06.429622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-19 04:14:06.429748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-19 04:14:06.429765 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:06.429780 | orchestrator | 2026-03-19 04:14:06.429792 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-19 04:14:06.429804 | orchestrator | Thursday 19 March 2026 04:13:50 +0000 (0:00:01.814) 0:07:21.603 ******** 2026-03-19 04:14:06.429842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429857 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429879 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:06.429897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429934 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:06.429952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-19 04:14:06.429987 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:06.430005 | orchestrator | 2026-03-19 04:14:06.430101 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-19 04:14:06.430127 | orchestrator | Thursday 19 March 2026 04:13:52 +0000 (0:00:01.896) 0:07:23.500 ******** 2026-03-19 04:14:06.430289 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:14:06.430305 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:14:06.430319 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:14:06.430346 | orchestrator | 2026-03-19 
04:14:06.430358 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-19 04:14:06.430369 | orchestrator | Thursday 19 March 2026 04:13:54 +0000 (0:00:02.254) 0:07:25.754 ******** 2026-03-19 04:14:06.430380 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:14:06.430391 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:14:06.430401 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:14:06.430412 | orchestrator | 2026-03-19 04:14:06.430423 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-19 04:14:06.430433 | orchestrator | Thursday 19 March 2026 04:13:57 +0000 (0:00:02.948) 0:07:28.702 ******** 2026-03-19 04:14:06.430444 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:14:06.430455 | orchestrator | 2026-03-19 04:14:06.430466 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-19 04:14:06.430477 | orchestrator | Thursday 19 March 2026 04:13:59 +0000 (0:00:02.389) 0:07:31.092 ******** 2026-03-19 04:14:06.430619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:06.430708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:06.430731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:06.430763 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:14:06.430802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:14:10.130088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:14:10.130198 | orchestrator | 2026-03-19 04:14:10.130215 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-19 04:14:10.130228 | orchestrator | Thursday 19 March 2026 04:14:06 +0000 (0:00:06.791) 0:07:37.884 ******** 2026-03-19 04:14:10.130241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:10.130273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:14:10.130287 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:10.130319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:10.130354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:14:10.130366 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:10.130383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:10.130395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:14:10.130414 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:10.130426 | orchestrator | 2026-03-19 04:14:10.130437 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-19 04:14:10.130448 | orchestrator | Thursday 19 March 2026 04:14:08 +0000 (0:00:02.033) 0:07:39.918 ******** 2026-03-19 04:14:10.130462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:10.130483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051852 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:19.051874 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:19.051888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051913 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:19.051924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:19.051935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-19 04:14:19.051958 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 04:14:19.051969 | orchestrator | 2026-03-19 04:14:19.051997 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-19 04:14:19.052010 | orchestrator | Thursday 19 March 2026 04:14:10 +0000 (0:00:01.676) 0:07:41.595 ******** 2026-03-19 04:14:19.052021 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:19.052032 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:19.052043 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:19.052053 | orchestrator | 2026-03-19 04:14:19.052064 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-19 04:14:19.052097 | orchestrator | Thursday 19 March 2026 04:14:11 +0000 (0:00:01.436) 0:07:43.031 ******** 2026-03-19 04:14:19.052108 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:19.052119 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:19.052130 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:19.052140 | orchestrator | 2026-03-19 04:14:19.052151 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-19 04:14:19.052162 | orchestrator | Thursday 19 March 2026 04:14:13 +0000 (0:00:02.247) 0:07:45.278 ******** 2026-03-19 04:14:19.052173 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:14:19.052184 | orchestrator | 2026-03-19 04:14:19.052195 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-19 04:14:19.052206 | orchestrator | Thursday 19 March 2026 04:14:16 +0000 (0:00:02.652) 0:07:47.930 ******** 2026-03-19 04:14:19.052238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-19 04:14:19.052257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:19.052272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 
04:14:19.052287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:19.052306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:19.052329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-19 04:14:19.052344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:19.052365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:20.916027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-19 04:14:20.916124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:20.916153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-19 04:14:20.916182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:20.916192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:20.916200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:20.916222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:20.916231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:20.916249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:20.916258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:20.916269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:20.916289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:23.171380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.171657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 
04:14:23.171683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.171696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.171707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.171722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:14:23.171757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:23.171785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.171797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.171808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.171820 | orchestrator | 2026-03-19 04:14:23.171834 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-19 04:14:23.171846 | orchestrator | Thursday 19 March 2026 04:14:22 +0000 (0:00:05.704) 0:07:53.635 ******** 2026-03-19 04:14:23.171858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-19 04:14:23.171871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:23.171893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.306902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.307016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.307032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:23.307044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:23.307053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.307099 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.307108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.307116 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:23.307131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-19 04:14:23.307140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:23.307148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.307155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:23.307163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:23.307187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:24.496562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:24.496655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-19 04:14:24.496669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:24.496697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-19 04:14:24.496705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:24.496737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:24.496746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:24.496754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:24.496762 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:24.496771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-19 04:14:24.496780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:14:24.496794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-19 04:14:24.496812 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:36.563855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:14:36.564021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-19 04:14:36.564049 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:36.564072 | orchestrator | 2026-03-19 04:14:36.564091 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-19 04:14:36.564112 | orchestrator | Thursday 19 March 2026 04:14:24 +0000 (0:00:02.326) 0:07:55.962 ******** 2026-03-19 
04:14:36.564133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564232 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:36.564244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564327 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:36.564339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-19 04:14:36.564366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-19 04:14:36.564400 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:36.564413 | orchestrator | 2026-03-19 04:14:36.564426 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-19 04:14:36.564440 | orchestrator | Thursday 19 March 2026 04:14:26 +0000 (0:00:01.901) 0:07:57.863 ******** 2026-03-19 04:14:36.564453 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:36.564465 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:36.564478 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:36.564490 | orchestrator | 2026-03-19 04:14:36.564502 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-19 04:14:36.564515 | orchestrator | Thursday 19 March 2026 04:14:28 +0000 (0:00:02.024) 0:07:59.888 ******** 2026-03-19 04:14:36.564528 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:36.564582 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:36.564595 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:14:36.564607 | orchestrator | 2026-03-19 04:14:36.564619 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-19 04:14:36.564632 | orchestrator | Thursday 19 March 2026 04:14:30 +0000 (0:00:02.191) 0:08:02.079 ******** 2026-03-19 04:14:36.564645 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:14:36.564657 | orchestrator | 2026-03-19 04:14:36.564669 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-19 04:14:36.564682 | orchestrator | Thursday 19 March 2026 04:14:32 +0000 (0:00:02.227) 0:08:04.307 ******** 2026-03-19 04:14:36.564697 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:14:36.564739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:14:53.710745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:14:53.710942 | orchestrator | 2026-03-19 04:14:53.710975 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-19 04:14:53.710998 | 
orchestrator | Thursday 19 March 2026 04:14:36 +0000 (0:00:03.710) 0:08:08.018 ******** 2026-03-19 04:14:53.711019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:14:53.711042 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:53.711062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:14:53.711081 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:53.711127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:14:53.711151 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:53.711162 | orchestrator | 2026-03-19 04:14:53.711173 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-19 04:14:53.711184 | orchestrator | Thursday 19 March 2026 04:14:38 +0000 (0:00:01.538) 0:08:09.557 ******** 2026-03-19 04:14:53.711196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 04:14:53.711209 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:53.711220 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 04:14:53.711279 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:53.711293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-19 04:14:53.711306 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:53.711318 | orchestrator | 2026-03-19 04:14:53.711331 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-19 04:14:53.711344 | orchestrator | Thursday 19 March 2026 04:14:39 +0000 (0:00:01.429) 0:08:10.986 ******** 2026-03-19 04:14:53.711356 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:53.711369 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:53.711382 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:53.711395 | orchestrator | 2026-03-19 04:14:53.711407 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-19 04:14:53.711419 | orchestrator | Thursday 19 March 2026 04:14:41 +0000 (0:00:01.858) 0:08:12.844 ******** 2026-03-19 04:14:53.711432 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:53.711444 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:53.711456 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:14:53.711469 | orchestrator | 2026-03-19 04:14:53.711482 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-19 04:14:53.711494 | orchestrator | Thursday 19 March 2026 04:14:43 +0000 (0:00:02.178) 0:08:15.023 ******** 2026-03-19 04:14:53.711507 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:14:53.711520 | orchestrator | 2026-03-19 04:14:53.711532 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-19 04:14:53.711545 | orchestrator | Thursday 19 March 2026 04:14:45 +0000 (0:00:02.244) 0:08:17.267 ******** 2026-03-19 04:14:53.711592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-19 04:14:53.711615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-19 04:14:53.711647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-19 04:14:55.396165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:14:55.396300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:14:55.396319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-19 04:14:55.396362 | orchestrator | 2026-03-19 04:14:55.396375 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-19 04:14:55.396387 | orchestrator | Thursday 19 March 2026 04:14:53 +0000 (0:00:07.902) 0:08:25.170 ******** 2026-03-19 04:14:55.396451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-19 04:14:55.396460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:14:55.396467 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:14:55.396478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-19 04:14:55.396491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:14:55.396497 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:14:55.396509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-19 04:15:16.811965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-19 04:15:16.812089 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812110 | orchestrator | 2026-03-19 04:15:16.812124 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-19 04:15:16.812138 | orchestrator | Thursday 19 March 2026 04:14:55 +0000 (0:00:01.688) 
0:08:26.858 ******** 2026-03-19 04:15:16.812180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812246 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812271 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812287 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-19 04:15:16.812326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-19 04:15:16.812343 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812351 | orchestrator | 
2026-03-19 04:15:16.812359 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-19 04:15:16.812367 | orchestrator | Thursday 19 March 2026 04:14:57 +0000 (0:00:02.004) 0:08:28.863 ******** 2026-03-19 04:15:16.812383 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:15:16.812392 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:15:16.812399 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:15:16.812407 | orchestrator | 2026-03-19 04:15:16.812415 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-19 04:15:16.812423 | orchestrator | Thursday 19 March 2026 04:14:59 +0000 (0:00:02.305) 0:08:31.168 ******** 2026-03-19 04:15:16.812431 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:15:16.812439 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:15:16.812447 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:15:16.812454 | orchestrator | 2026-03-19 04:15:16.812462 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-19 04:15:16.812470 | orchestrator | Thursday 19 March 2026 04:15:02 +0000 (0:00:02.959) 0:08:34.128 ******** 2026-03-19 04:15:16.812478 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812486 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812493 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812501 | orchestrator | 2026-03-19 04:15:16.812511 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-19 04:15:16.812520 | orchestrator | Thursday 19 March 2026 04:15:04 +0000 (0:00:01.413) 0:08:35.542 ******** 2026-03-19 04:15:16.812529 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812538 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812547 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812555 | orchestrator | 2026-03-19 04:15:16.812564 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-03-19 04:15:16.812574 | orchestrator | Thursday 19 March 2026 04:15:05 +0000 (0:00:01.369) 0:08:36.911 ******** 2026-03-19 04:15:16.812642 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812652 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812662 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812671 | orchestrator | 2026-03-19 04:15:16.812680 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-19 04:15:16.812690 | orchestrator | Thursday 19 March 2026 04:15:07 +0000 (0:00:01.668) 0:08:38.579 ******** 2026-03-19 04:15:16.812699 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812708 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812717 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812726 | orchestrator | 2026-03-19 04:15:16.812736 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-19 04:15:16.812746 | orchestrator | Thursday 19 March 2026 04:15:08 +0000 (0:00:01.334) 0:08:39.914 ******** 2026-03-19 04:15:16.812755 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:15:16.812764 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:15:16.812772 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:15:16.812780 | orchestrator | 2026-03-19 04:15:16.812788 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-19 04:15:16.812796 | orchestrator | Thursday 19 March 2026 04:15:09 +0000 (0:00:01.378) 0:08:41.292 ******** 2026-03-19 04:15:16.812804 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:15:16.812813 | orchestrator | 2026-03-19 04:15:16.812821 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-03-19 04:15:16.812828 | orchestrator | Thursday 19 March 2026 04:15:12 +0000 (0:00:02.671) 0:08:43.963 ******** 2026-03-19 04:15:16.812838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-19 04:15:16.812861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-19 04:15:20.764822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-19 04:15:20.764923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:15:20.764954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-19 04:15:20.764966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 04:15:20.764978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:15:20.765010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:15:20.765039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:15:20.765051 | orchestrator |
2026-03-19 04:15:20.765063 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-03-19 04:15:20.765073 | orchestrator | Thursday 19 March 2026 04:15:16 +0000 (0:00:04.308) 0:08:48.272 ********
2026-03-19 04:15:20.765084 | orchestrator | changed: [testbed-node-0] => {
2026-03-19 04:15:20.765095 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:15:20.765105 | orchestrator | }
2026-03-19 04:15:20.765115 | orchestrator | changed: [testbed-node-1] => {
2026-03-19 04:15:20.765125 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:15:20.765135 | orchestrator | }
2026-03-19 04:15:20.765144 | orchestrator | changed: [testbed-node-2] => {
2026-03-19 04:15:20.765154 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:15:20.765163 | orchestrator | }
2026-03-19 04:15:20.765173 | orchestrator |
2026-03-19 04:15:20.765183 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-19 04:15:20.765193 | orchestrator | Thursday 19 March 2026 04:15:18 +0000 (0:00:01.502) 0:08:49.774 ********
2026-03-19 04:15:20.765203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-19 04:15:20.765219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 04:15:20.765230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:15:20.765247 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:15:20.765257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-19 04:15:20.765268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 04:15:20.765286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:17:23.322164 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.322290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-19 04:17:23.322326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-19 04:17:23.322335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-19 04:17:23.322343 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.322370 | orchestrator |
2026-03-19 04:17:23.322378 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-19 04:17:23.322386 | orchestrator | Thursday 19 March 2026 04:15:20 +0000 (0:00:02.449) 0:08:52.224 ********
2026-03-19 04:17:23.322392 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.322400 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.322406 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.322413 | orchestrator |
2026-03-19 04:17:23.322420 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-19 04:17:23.322427 | orchestrator | Thursday 19 March 2026 04:15:22 +0000 (0:00:01.405) 0:08:54.020 ********
2026-03-19 04:17:23.322434 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.322441 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.322447 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.322454 | orchestrator |
2026-03-19 04:17:23.322460 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-19 04:17:23.322467 | orchestrator | Thursday 19 March 2026 04:15:23 +0000 (0:00:01.405) 0:08:55.426 ********
2026-03-19 04:17:23.322474 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322481 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322487 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322494 | orchestrator |
2026-03-19 04:17:23.322500 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-19 04:17:23.322507 | orchestrator | Thursday 19 March 2026 04:15:31 +0000 (0:00:07.150) 0:09:02.577 ********
2026-03-19 04:17:23.322514 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322520 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322527 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322534 | orchestrator |
2026-03-19 04:17:23.322540 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-19 04:17:23.322547 | orchestrator | Thursday 19 March 2026 04:15:38 +0000 (0:00:07.386) 0:09:09.963 ********
2026-03-19 04:17:23.322554 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322560 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322567 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322573 | orchestrator |
2026-03-19 04:17:23.322580 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-19 04:17:23.322587 | orchestrator | Thursday 19 March 2026 04:15:45 +0000 (0:00:07.091) 0:09:17.056 ********
2026-03-19 04:17:23.322593 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322600 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322607 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322613 | orchestrator |
2026-03-19 04:17:23.322620 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-19 04:17:23.322627 | orchestrator | Thursday 19 March 2026 04:15:53 +0000 (0:00:07.486) 0:09:24.543 ********
2026-03-19 04:17:23.322633 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.322640 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.322647 | orchestrator |
2026-03-19 04:17:23.322654 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-19 04:17:23.322660 | orchestrator | Thursday 19 March 2026 04:15:56 +0000 (0:00:03.678) 0:09:28.222 ********
2026-03-19 04:17:23.322667 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322674 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322680 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322687 | orchestrator |
2026-03-19 04:17:23.322708 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-19 04:17:23.322716 | orchestrator | Thursday 19 March 2026 04:16:10 +0000 (0:00:13.549) 0:09:41.771 ********
2026-03-19 04:17:23.322724 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.322731 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.322739 | orchestrator |
2026-03-19 04:17:23.322747 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-19 04:17:23.322754 | orchestrator | Thursday 19 March 2026 04:16:14 +0000 (0:00:04.682) 0:09:46.454 ********
2026-03-19 04:17:23.322803 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:23.322813 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:17:23.322821 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:17:23.322829 | orchestrator |
2026-03-19 04:17:23.322837 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-19 04:17:23.322844 | orchestrator | Thursday 19 March 2026 04:16:22 +0000 (0:00:07.149) 0:09:53.604 ********
2026-03-19 04:17:23.322852 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.322860 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.322867 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.322875 | orchestrator |
2026-03-19 04:17:23.322882 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-19 04:17:23.322890 | orchestrator | Thursday 19 March 2026 04:16:28 +0000 (0:00:06.831) 0:10:00.435 ********
2026-03-19 04:17:23.322898 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.322906 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.322913 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.322921 | orchestrator |
2026-03-19 04:17:23.322929 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-19 04:17:23.322936 | orchestrator | Thursday 19 March 2026 04:16:35 +0000 (0:00:06.917) 0:10:07.353 ********
2026-03-19 04:17:23.322944 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.322951 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.322959 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.322967 | orchestrator |
2026-03-19 04:17:23.322979 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-19 04:17:23.322987 | orchestrator | Thursday 19 March 2026 04:16:42 +0000 (0:00:06.858) 0:10:14.212 ********
2026-03-19 04:17:23.322994 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.323000 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.323007 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.323013 | orchestrator |
2026-03-19 04:17:23.323020 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-03-19 04:17:23.323027 | orchestrator | Thursday 19 March 2026 04:16:50 +0000 (0:00:07.359) 0:10:21.571 ********
2026-03-19 04:17:23.323034 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.323040 | orchestrator |
2026-03-19 04:17:23.323047 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-19 04:17:23.323054 | orchestrator | Thursday 19 March 2026 04:16:53 +0000 (0:00:03.564) 0:10:25.135 ********
2026-03-19 04:17:23.323060 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.323067 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.323074 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.323080 | orchestrator |
2026-03-19 04:17:23.323087 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-03-19 04:17:23.323094 | orchestrator | Thursday 19 March 2026 04:17:06 +0000 (0:00:12.984) 0:10:38.119 ********
2026-03-19 04:17:23.323101 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.323108 | orchestrator |
2026-03-19 04:17:23.323115 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-19 04:17:23.323121 | orchestrator | Thursday 19 March 2026 04:17:11 +0000 (0:00:04.610) 0:10:42.729 ********
2026-03-19 04:17:23.323128 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:17:23.323134 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:17:23.323141 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:17:23.323148 | orchestrator |
2026-03-19 04:17:23.323154 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-19 04:17:23.323161 | orchestrator | Thursday 19 March 2026 04:17:18 +0000 (0:00:07.098) 0:10:49.828 ********
2026-03-19 04:17:23.323168 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.323174 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.323181 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.323188 | orchestrator |
2026-03-19 04:17:23.323194 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-19 04:17:23.323206 | orchestrator | Thursday 19 March 2026 04:17:20 +0000 (0:00:01.938) 0:10:51.766 ********
2026-03-19 04:17:23.323213 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:23.323219 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:23.323226 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:23.323233 | orchestrator |
2026-03-19 04:17:23.323239 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 04:17:23.323247 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-19 04:17:23.323256 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-19 04:17:23.323262 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-19 04:17:23.323269 | orchestrator |
2026-03-19 04:17:23.323276 | orchestrator |
2026-03-19 04:17:23.323282 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 04:17:23.323289 | orchestrator | Thursday 19 March 2026 04:17:23 +0000 (0:00:03.004) 0:10:54.770 ********
2026-03-19 04:17:23.323296 | orchestrator | ===============================================================================
2026-03-19 04:17:23.323302 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.55s
2026-03-19 04:17:23.323309 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.98s
2026-03-19 04:17:23.323316 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.90s
2026-03-19 04:17:23.323327 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.49s
2026-03-19 04:17:24.086657 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.39s
2026-03-19 04:17:24.086760 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.36s
2026-03-19 04:17:24.086834 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.15s
2026-03-19 04:17:24.086847 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.15s
2026-03-19 04:17:24.086858 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.10s
2026-03-19 04:17:24.086870 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.09s
2026-03-19 04:17:24.086881 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.92s
2026-03-19 04:17:24.086893 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.86s
2026-03-19 04:17:24.086927 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.83s
2026-03-19 04:17:24.086938 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.83s
2026-03-19 04:17:24.086949 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.79s
2026-03-19 04:17:24.086960 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.84s
2026-03-19 04:17:24.086971 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.70s
2026-03-19 04:17:24.086982 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.51s
2026-03-19 04:17:24.086993 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.37s
2026-03-19 04:17:24.087004 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.18s
2026-03-19 04:17:24.374277 | orchestrator | + osism apply -a upgrade opensearch
2026-03-19 04:17:26.430915 | orchestrator | 2026-03-19 04:17:26 | INFO  | Task 10e78c98-6bfe-4a85-babf-dde4363537d7 (opensearch) was prepared for execution.
2026-03-19 04:17:26.431013 | orchestrator | 2026-03-19 04:17:26 | INFO  | It takes a moment until task 10e78c98-6bfe-4a85-babf-dde4363537d7 (opensearch) has been started and output is visible here.
2026-03-19 04:17:44.555318 | orchestrator |
2026-03-19 04:17:44.555433 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 04:17:44.555473 | orchestrator |
2026-03-19 04:17:44.555485 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 04:17:44.555494 | orchestrator | Thursday 19 March 2026 04:17:31 +0000 (0:00:01.366) 0:00:01.366 ********
2026-03-19 04:17:44.555504 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:17:44.555514 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:17:44.555524 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:17:44.555533 | orchestrator |
2026-03-19 04:17:44.555542 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 04:17:44.555553 | orchestrator | Thursday 19 March 2026 04:17:33 +0000 (0:00:01.769) 0:00:03.135 ********
2026-03-19 04:17:44.555564 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-19 04:17:44.555574 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-19 04:17:44.555584 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-19 04:17:44.555595 | orchestrator |
2026-03-19 04:17:44.555605 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-19 04:17:44.555615 | orchestrator |
2026-03-19 04:17:44.555625 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 04:17:44.555635 | orchestrator | Thursday 19 March 2026 04:17:35 +0000 (0:00:01.952) 0:00:05.088 ********
2026-03-19 04:17:44.555646 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:17:44.555656 | orchestrator |
2026-03-19 04:17:44.555680 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-19 04:17:44.555688 | orchestrator | Thursday 19 March 2026 04:17:38 +0000 (0:00:03.021) 0:00:08.110 ********
2026-03-19 04:17:44.555697 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 04:17:44.555705 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 04:17:44.555713 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-19 04:17:44.555722 | orchestrator |
2026-03-19 04:17:44.555732 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-19 04:17:44.555742 | orchestrator | Thursday 19 March 2026 04:17:40 +0000 (0:00:01.949) 0:00:10.059 ********
2026-03-19 04:17:44.555755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:44.555768 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:44.555845 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:44.555861 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:44.555873 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:44.555886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:44.555900 | orchestrator |
2026-03-19 04:17:44.555909 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-19 04:17:44.555919 | orchestrator | Thursday 19 March 2026 04:17:42 +0000 (0:00:02.255) 0:00:12.315 ********
2026-03-19 04:17:44.555928 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-19 04:17:44.555941 | orchestrator |
2026-03-19 04:17:44.555963 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-19 04:17:49.722168 | orchestrator | Thursday 19 March 2026 04:17:44 +0000 (0:00:01.658) 0:00:13.974 ********
2026-03-19 04:17:49.722294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:49.722317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:49.722331 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:49.722363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:49.723207 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:49.723232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:49.723245 | orchestrator |
2026-03-19 04:17:49.723258 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-19 04:17:49.723270 | orchestrator | Thursday 19 March 2026 04:17:47 +0000 (0:00:03.436) 0:00:17.411 ********
2026-03-19 04:17:49.723281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-19 04:17:49.723319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:17:51.541991 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:17:51.542133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:17:51.542148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:17:51.542176 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 04:17:51.542183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:17:51.542219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:17:51.542228 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:17:51.542234 | orchestrator | 2026-03-19 04:17:51.542242 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-19 04:17:51.542250 | orchestrator | Thursday 19 March 2026 04:17:49 +0000 (0:00:01.737) 0:00:19.148 ******** 2026-03-19 04:17:51.542257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:17:51.542264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:17:51.542277 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:17:51.542287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:17:51.542300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:17:55.232312 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:17:55.232459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-03-19 04:17:55.232485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:17:55.232525 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:17:55.232538 | orchestrator | 2026-03-19 04:17:55.232550 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-19 04:17:55.232562 | orchestrator | Thursday 19 March 2026 04:17:51 +0000 (0:00:01.817) 0:00:20.965 ******** 2026-03-19 04:17:55.232589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:17:55.232623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:17:55.232636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:17:55.232658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:17:55.232676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:17:55.232699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-19 04:18:08.692476 | orchestrator |
2026-03-19 04:18:08.692577 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-03-19 04:18:08.692590 | orchestrator | Thursday 19 March 2026 04:17:55 +0000 (0:00:03.692) 0:00:24.658 ********
2026-03-19 04:18:08.692599 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:18:08.692608 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:18:08.692616 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:18:08.692661 | orchestrator |
2026-03-19 04:18:08.692671 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-03-19 04:18:08.692679 | orchestrator | Thursday 19 March 2026 04:17:58 +0000 (0:00:03.307) 0:00:27.965 ********
2026-03-19 04:18:08.692696 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:18:08.692704 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:18:08.692712 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:18:08.692719 | orchestrator |
2026-03-19 04:18:08.692727 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-03-19 04:18:08.692736 | orchestrator | Thursday 19 March 2026 04:18:01 +0000 (0:00:03.192) 0:00:31.158 ********
2026-03-19 04:18:08.692746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:18:08.692770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:18:08.692779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-19 04:18:08.692805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:18:08.692872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:18:08.692889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-19 04:18:08.692898 | orchestrator | 2026-03-19 
04:18:08.692907 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-03-19 04:18:08.692915 | orchestrator | Thursday 19 March 2026 04:18:05 +0000 (0:00:03.526) 0:00:34.685 ********
2026-03-19 04:18:08.692924 | orchestrator | changed: [testbed-node-0] => {
2026-03-19 04:18:08.692932 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:18:08.692940 | orchestrator | }
2026-03-19 04:18:08.692948 | orchestrator | changed: [testbed-node-1] => {
2026-03-19 04:18:08.692956 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:18:08.692964 | orchestrator | }
2026-03-19 04:18:08.692972 | orchestrator | changed: [testbed-node-2] => {
2026-03-19 04:18:08.692980 | orchestrator |  "msg": "Notifying handlers"
2026-03-19 04:18:08.692988 | orchestrator | }
2026-03-19 04:18:08.692996 | orchestrator |
2026-03-19 04:18:08.693004 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-19 04:18:08.693012 | orchestrator | Thursday 19 March 2026 04:18:06 +0000 (0:00:01.412) 0:00:36.097 ********
2026-03-19 04:18:08.693036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'],
'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:21:41.027910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:21:41.028052 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:21:41.028212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:21:41.028233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:21:41.028270 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:21:41.028298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-19 04:21:41.028310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-19 04:21:41.028321 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:21:41.028330 | orchestrator | 2026-03-19 04:21:41.028341 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-03-19 04:21:41.028352 | orchestrator | Thursday 19 March 2026 04:18:08 +0000 (0:00:02.018) 0:00:38.115 ******** 2026-03-19 04:21:41.028362 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:21:41.028371 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:21:41.028381 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:21:41.028390 | orchestrator | 2026-03-19 04:21:41.028400 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-19 04:21:41.028409 | orchestrator | Thursday 19 March 2026 04:18:10 +0000 (0:00:01.471) 0:00:39.586 ******** 2026-03-19 04:21:41.028419 | orchestrator | 2026-03-19 04:21:41.028429 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-19 04:21:41.028438 | orchestrator | Thursday 19 March 2026 04:18:10 +0000 (0:00:00.428) 0:00:40.015 ******** 2026-03-19 04:21:41.028450 | orchestrator | 2026-03-19 04:21:41.028467 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-19 04:21:41.028479 | orchestrator | Thursday 19 March 2026 04:18:10 +0000 (0:00:00.420) 0:00:40.435 ******** 2026-03-19 04:21:41.028491 | orchestrator | 2026-03-19 04:21:41.028502 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-19 04:21:41.028513 | orchestrator | Thursday 19 March 2026 04:18:11 +0000 (0:00:00.788) 0:00:41.223 ******** 2026-03-19 04:21:41.028524 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:21:41.028550 | orchestrator | 2026-03-19 04:21:41.028562 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-19 04:21:41.028574 | orchestrator | Thursday 19 March 2026 04:18:15 +0000 (0:00:03.830) 0:00:45.053 ******** 2026-03-19 04:21:41.028586 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:21:41.028603 | orchestrator | 2026-03-19 
04:21:41.028619 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-19 04:21:41.028646 | orchestrator | Thursday 19 March 2026 04:18:24 +0000 (0:00:09.313) 0:00:54.367 ******** 2026-03-19 04:21:41.028664 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:21:41.028680 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:21:41.028696 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:21:41.028713 | orchestrator | 2026-03-19 04:21:41.028729 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-19 04:21:41.028744 | orchestrator | Thursday 19 March 2026 04:19:46 +0000 (0:01:22.061) 0:02:16.429 ******** 2026-03-19 04:21:41.028758 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:21:41.028774 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:21:41.028790 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:21:41.028806 | orchestrator | 2026-03-19 04:21:41.028823 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-19 04:21:41.028836 | orchestrator | Thursday 19 March 2026 04:21:31 +0000 (0:01:44.150) 0:04:00.579 ******** 2026-03-19 04:21:41.028847 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:21:41.028857 | orchestrator | 2026-03-19 04:21:41.028867 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-19 04:21:41.028877 | orchestrator | Thursday 19 March 2026 04:21:32 +0000 (0:00:01.704) 0:04:02.284 ******** 2026-03-19 04:21:41.028886 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:21:41.028896 | orchestrator | 2026-03-19 04:21:41.028905 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-19 04:21:41.028915 | orchestrator | Thursday 19 March 2026 04:21:36 +0000 (0:00:03.511) 
0:04:05.796 ******** 2026-03-19 04:21:41.028924 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:21:41.028934 | orchestrator | 2026-03-19 04:21:41.028944 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-19 04:21:41.028953 | orchestrator | Thursday 19 March 2026 04:21:39 +0000 (0:00:03.430) 0:04:09.227 ******** 2026-03-19 04:21:41.028963 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:21:41.028973 | orchestrator | 2026-03-19 04:21:41.028982 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-19 04:21:41.029001 | orchestrator | Thursday 19 March 2026 04:21:41 +0000 (0:00:01.218) 0:04:10.445 ******** 2026-03-19 04:21:43.339679 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:21:43.339781 | orchestrator | 2026-03-19 04:21:43.339797 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:21:43.339813 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-19 04:21:43.339827 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 04:21:43.339838 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 04:21:43.339851 | orchestrator | 2026-03-19 04:21:43.339864 | orchestrator | 2026-03-19 04:21:43.339878 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:21:43.339892 | orchestrator | Thursday 19 March 2026 04:21:42 +0000 (0:00:01.937) 0:04:12.382 ******** 2026-03-19 04:21:43.339905 | orchestrator | =============================================================================== 2026-03-19 04:21:43.339919 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------ 104.15s 2026-03-19 04:21:43.339955 | orchestrator | 
opensearch : Restart opensearch container ------------------------------ 82.06s 2026-03-19 04:21:43.339964 | orchestrator | opensearch : Perform a flush -------------------------------------------- 9.31s 2026-03-19 04:21:43.339972 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.83s 2026-03-19 04:21:43.339981 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.69s 2026-03-19 04:21:43.339995 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.53s 2026-03-19 04:21:43.340008 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.51s 2026-03-19 04:21:43.340021 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.44s 2026-03-19 04:21:43.340034 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.43s 2026-03-19 04:21:43.340046 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.31s 2026-03-19 04:21:43.340142 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.19s 2026-03-19 04:21:43.340159 | orchestrator | opensearch : include_tasks ---------------------------------------------- 3.02s 2026-03-19 04:21:43.340173 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.26s 2026-03-19 04:21:43.340186 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-03-19 04:21:43.340215 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s 2026-03-19 04:21:43.340225 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.95s 2026-03-19 04:21:43.340235 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.94s 2026-03-19 04:21:43.340249 | orchestrator | 
service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.82s 2026-03-19 04:21:43.340262 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.77s 2026-03-19 04:21:43.340277 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.74s 2026-03-19 04:21:43.621354 | orchestrator | + osism apply -a upgrade memcached 2026-03-19 04:21:45.655862 | orchestrator | 2026-03-19 04:21:45 | INFO  | Task fa7884d2-1474-4ec1-83c2-47680b8a7fa9 (memcached) was prepared for execution. 2026-03-19 04:21:45.655990 | orchestrator | 2026-03-19 04:21:45 | INFO  | It takes a moment until task fa7884d2-1474-4ec1-83c2-47680b8a7fa9 (memcached) has been started and output is visible here. 2026-03-19 04:22:20.260575 | orchestrator | 2026-03-19 04:22:20.260654 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 04:22:20.260661 | orchestrator | 2026-03-19 04:22:20.260665 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 04:22:20.260670 | orchestrator | Thursday 19 March 2026 04:21:52 +0000 (0:00:02.345) 0:00:02.345 ******** 2026-03-19 04:22:20.260675 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:22:20.260682 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:22:20.260688 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:22:20.260692 | orchestrator | 2026-03-19 04:22:20.260696 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 04:22:20.260701 | orchestrator | Thursday 19 March 2026 04:21:54 +0000 (0:00:02.178) 0:00:04.524 ******** 2026-03-19 04:22:20.260705 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-19 04:22:20.260710 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-19 04:22:20.260714 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 
2026-03-19 04:22:20.260718 | orchestrator | 2026-03-19 04:22:20.260722 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-19 04:22:20.260726 | orchestrator | 2026-03-19 04:22:20.260730 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-19 04:22:20.260734 | orchestrator | Thursday 19 March 2026 04:21:57 +0000 (0:00:03.580) 0:00:08.104 ******** 2026-03-19 04:22:20.260739 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:22:20.260757 | orchestrator | 2026-03-19 04:22:20.260761 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-19 04:22:20.260765 | orchestrator | Thursday 19 March 2026 04:21:59 +0000 (0:00:01.775) 0:00:09.879 ******** 2026-03-19 04:22:20.260769 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-19 04:22:20.260773 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-19 04:22:20.260777 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-19 04:22:20.260781 | orchestrator | 2026-03-19 04:22:20.260784 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-19 04:22:20.260788 | orchestrator | Thursday 19 March 2026 04:22:01 +0000 (0:00:01.795) 0:00:11.674 ******** 2026-03-19 04:22:20.260792 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-19 04:22:20.260796 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-19 04:22:20.260800 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-19 04:22:20.260812 | orchestrator | 2026-03-19 04:22:20.260816 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-03-19 04:22:20.260819 | orchestrator | Thursday 19 March 2026 04:22:04 +0000 (0:00:02.685) 0:00:14.360 ******** 2026-03-19 04:22:20.260826 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 04:22:20.260838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 04:22:20.260851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-19 04:22:20.260855 | orchestrator | 2026-03-19 04:22:20.260859 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-03-19 04:22:20.260863 | orchestrator | Thursday 19 March 2026 04:22:06 +0000 (0:00:02.317) 0:00:16.677 ******** 2026-03-19 04:22:20.260867 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:22:20.260871 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:22:20.260878 | orchestrator | } 2026-03-19 04:22:20.260883 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:22:20.260886 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:22:20.260890 | orchestrator | } 2026-03-19 04:22:20.260894 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:22:20.260897 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:22:20.260901 | orchestrator | } 2026-03-19 04:22:20.260905 | orchestrator | 2026-03-19 04:22:20.260909 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:22:20.260912 | orchestrator | Thursday 19 March 2026 04:22:07 +0000 (0:00:01.343) 0:00:18.021 ******** 2026-03-19 04:22:20.260916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:22:20.260921 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:22:20.260925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:22:20.260928 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:22:20.260932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-19 04:22:20.260936 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:22:20.260940 | orchestrator | 2026-03-19 04:22:20.260944 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-19 04:22:20.260949 | orchestrator | Thursday 19 March 2026 04:22:09 +0000 (0:00:01.831) 0:00:19.852 ******** 2026-03-19 04:22:20.260953 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:22:20.260957 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:22:20.260961 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:22:20.260964 | orchestrator | 2026-03-19 04:22:20.260968 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:22:20.260973 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:22:20.260978 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:22:20.260987 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:22:20.260991 | orchestrator | 2026-03-19 04:22:20.260994 | orchestrator | 2026-03-19 04:22:20.260998 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:22:20.261004 | orchestrator | Thursday 19 March 2026 04:22:20 +0000 (0:00:10.618) 0:00:30.471 ******** 2026-03-19 04:22:20.484327 | orchestrator | =============================================================================== 2026-03-19 04:22:20.484435 | orchestrator | memcached : Restart memcached container -------------------------------- 10.62s 2026-03-19 04:22:20.484450 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.58s 2026-03-19 04:22:20.484462 | orchestrator | memcached 
: Copying over config.json files for services ----------------- 2.69s 2026-03-19 04:22:20.484474 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.32s 2026-03-19 04:22:20.484485 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.18s 2026-03-19 04:22:20.484496 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.83s 2026-03-19 04:22:20.484507 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.80s 2026-03-19 04:22:20.484518 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.77s 2026-03-19 04:22:20.484529 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.34s 2026-03-19 04:22:20.672030 | orchestrator | + osism apply -a upgrade redis 2026-03-19 04:22:22.121693 | orchestrator | 2026-03-19 04:22:22 | INFO  | Task 8ab4f5c9-bcd7-4c80-8398-bd58debc38b2 (redis) was prepared for execution. 2026-03-19 04:22:22.121796 | orchestrator | 2026-03-19 04:22:22 | INFO  | It takes a moment until task 8ab4f5c9-bcd7-4c80-8398-bd58debc38b2 (redis) has been started and output is visible here. 
2026-03-19 04:22:40.308098 | orchestrator | 2026-03-19 04:22:40.308229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 04:22:40.308239 | orchestrator | 2026-03-19 04:22:40.308245 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 04:22:40.308251 | orchestrator | Thursday 19 March 2026 04:22:27 +0000 (0:00:01.741) 0:00:01.741 ******** 2026-03-19 04:22:40.308256 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:22:40.308261 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:22:40.308266 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:22:40.308271 | orchestrator | 2026-03-19 04:22:40.308276 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 04:22:40.308280 | orchestrator | Thursday 19 March 2026 04:22:29 +0000 (0:00:01.568) 0:00:03.310 ******** 2026-03-19 04:22:40.308285 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-19 04:22:40.308290 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-19 04:22:40.308295 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-19 04:22:40.308300 | orchestrator | 2026-03-19 04:22:40.308305 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-19 04:22:40.308309 | orchestrator | 2026-03-19 04:22:40.308314 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-19 04:22:40.308318 | orchestrator | Thursday 19 March 2026 04:22:32 +0000 (0:00:03.824) 0:00:07.135 ******** 2026-03-19 04:22:40.308323 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:22:40.308329 | orchestrator | 2026-03-19 04:22:40.308333 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-19 
04:22:40.308338 | orchestrator | Thursday 19 March 2026 04:22:35 +0000 (0:00:02.171) 0:00:09.306 ******** 2026-03-19 04:22:40.308345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308378 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308389 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308407 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308412 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308417 | orchestrator | 2026-03-19 04:22:40.308422 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-19 04:22:40.308431 | orchestrator | Thursday 19 March 2026 04:22:37 +0000 (0:00:02.176) 0:00:11.483 ******** 2026-03-19 04:22:40.308436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308441 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308446 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:40.308460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.318875 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.318974 | orchestrator | 2026-03-19 04:22:47.318981 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-19 04:22:47.318987 | orchestrator | Thursday 19 March 2026 04:22:40 +0000 (0:00:03.083) 0:00:14.566 ******** 2026-03-19 04:22:47.319019 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319028 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-03-19 04:22:47.319032 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319036 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319040 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319054 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319062 | orchestrator | 2026-03-19 04:22:47.319066 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-19 04:22:47.319070 | orchestrator | Thursday 19 March 2026 04:22:44 +0000 (0:00:03.853) 0:00:18.420 ******** 2026-03-19 04:22:47.319074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:22:47.319103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-19 04:23:14.219052 | orchestrator | 2026-03-19 04:23:14.219145 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-19 04:23:14.219156 | orchestrator | Thursday 19 March 2026 04:22:47 +0000 (0:00:03.160) 0:00:21.580 ******** 2026-03-19 04:23:14.219164 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:23:14.219171 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:14.219178 | orchestrator | } 2026-03-19 04:23:14.219184 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:23:14.219191 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:14.219218 | orchestrator | } 2026-03-19 04:23:14.219225 | orchestrator | changed: 
[testbed-node-2] => { 2026-03-19 04:23:14.219232 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:14.219238 | orchestrator | } 2026-03-19 04:23:14.219244 | orchestrator | 2026-03-19 04:23:14.219251 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:23:14.219257 | orchestrator | Thursday 19 March 2026 04:22:48 +0000 (0:00:01.545) 0:00:23.126 ******** 2026-03-19 04:23:14.219265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219294 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:23:14.219301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219332 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:14.219339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-19 04:23:14.219385 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:14.219395 | orchestrator | 2026-03-19 04:23:14.219405 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 04:23:14.219415 | orchestrator | Thursday 19 March 2026 04:22:50 +0000 (0:00:01.808) 0:00:24.935 ******** 2026-03-19 04:23:14.219424 | orchestrator | 2026-03-19 04:23:14.219433 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 04:23:14.219442 | orchestrator | Thursday 19 March 2026 04:22:51 +0000 (0:00:00.465) 0:00:25.400 ******** 2026-03-19 04:23:14.219450 | orchestrator | 2026-03-19 04:23:14.219459 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-19 04:23:14.219468 | orchestrator | Thursday 19 March 2026 04:22:51 +0000 (0:00:00.416) 0:00:25.816 ******** 2026-03-19 04:23:14.219477 | orchestrator | 2026-03-19 04:23:14.219486 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-19 04:23:14.219495 | orchestrator | Thursday 19 March 2026 04:22:52 +0000 (0:00:00.766) 0:00:26.583 ******** 2026-03-19 04:23:14.219504 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:23:14.219513 | orchestrator | 
changed: [testbed-node-1] 2026-03-19 04:23:14.219522 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:23:14.219531 | orchestrator | 2026-03-19 04:23:14.219540 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-19 04:23:14.219556 | orchestrator | Thursday 19 March 2026 04:23:02 +0000 (0:00:10.660) 0:00:37.244 ******** 2026-03-19 04:23:14.219565 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:23:14.219574 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:23:14.219584 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:23:14.219593 | orchestrator | 2026-03-19 04:23:14.219603 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:23:14.219615 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:23:14.219627 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:23:14.219638 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 04:23:14.219648 | orchestrator | 2026-03-19 04:23:14.219658 | orchestrator | 2026-03-19 04:23:14.219668 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:23:14.219688 | orchestrator | Thursday 19 March 2026 04:23:13 +0000 (0:00:10.860) 0:00:48.105 ******** 2026-03-19 04:23:14.219699 | orchestrator | =============================================================================== 2026-03-19 04:23:14.219709 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.86s 2026-03-19 04:23:14.219719 | orchestrator | redis : Restart redis container ---------------------------------------- 10.66s 2026-03-19 04:23:14.219730 | orchestrator | redis : Copying over redis config files --------------------------------- 3.85s 2026-03-19 
04:23:14.219740 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.82s 2026-03-19 04:23:14.219751 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.16s 2026-03-19 04:23:14.219760 | orchestrator | redis : Copying over default config.json files -------------------------- 3.08s 2026-03-19 04:23:14.219770 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.18s 2026-03-19 04:23:14.219780 | orchestrator | redis : include_tasks --------------------------------------------------- 2.17s 2026-03-19 04:23:14.219790 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.81s 2026-03-19 04:23:14.219801 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.65s 2026-03-19 04:23:14.219812 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.57s 2026-03-19 04:23:14.219821 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.55s 2026-03-19 04:23:14.580615 | orchestrator | + osism apply -a upgrade mariadb 2026-03-19 04:23:16.619620 | orchestrator | 2026-03-19 04:23:16 | INFO  | Task c949a5da-7b8b-494e-a716-c10091e1eb09 (mariadb) was prepared for execution. 2026-03-19 04:23:16.619726 | orchestrator | 2026-03-19 04:23:16 | INFO  | It takes a moment until task c949a5da-7b8b-494e-a716-c10091e1eb09 (mariadb) has been started and output is visible here. 
2026-03-19 04:23:29.019455 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-19 04:23:29.019577 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-19 04:23:29.019609 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-19 04:23:29.019620 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-19 04:23:29.019642 | orchestrator | 2026-03-19 04:23:29.019655 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 04:23:29.019666 | orchestrator | 2026-03-19 04:23:29.019677 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 04:23:29.019689 | orchestrator | Thursday 19 March 2026 04:23:21 +0000 (0:00:00.862) 0:00:00.862 ******** 2026-03-19 04:23:29.019700 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:23:29.019712 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:23:29.019723 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:23:29.019733 | orchestrator | 2026-03-19 04:23:29.019745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 04:23:29.019756 | orchestrator | Thursday 19 March 2026 04:23:22 +0000 (0:00:00.696) 0:00:01.558 ******** 2026-03-19 04:23:29.019767 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-19 04:23:29.019778 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-19 04:23:29.019789 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-19 04:23:29.019801 | orchestrator | 2026-03-19 04:23:29.019812 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-19 04:23:29.019823 | orchestrator | 2026-03-19 04:23:29.019835 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-19 
04:23:29.019846 | orchestrator | Thursday 19 March 2026 04:23:23 +0000 (0:00:00.922) 0:00:02.480 ******** 2026-03-19 04:23:29.019881 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:23:29.019893 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:23:29.019904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:23:29.019915 | orchestrator | 2026-03-19 04:23:29.019926 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 04:23:29.019937 | orchestrator | Thursday 19 March 2026 04:23:23 +0000 (0:00:00.340) 0:00:02.821 ******** 2026-03-19 04:23:29.019962 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:23:29.019975 | orchestrator | 2026-03-19 04:23:29.019988 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-19 04:23:29.020001 | orchestrator | Thursday 19 March 2026 04:23:24 +0000 (0:00:01.294) 0:00:04.116 ******** 2026-03-19 04:23:29.020021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:29.020059 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:29.020091 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:29.020105 | orchestrator | 2026-03-19 04:23:29.020118 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-19 04:23:29.020131 | orchestrator | Thursday 19 March 2026 04:23:27 +0000 (0:00:02.530) 0:00:06.647 ******** 2026-03-19 04:23:29.020144 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:29.020158 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:29.020171 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:23:29.020184 | orchestrator | 2026-03-19 04:23:29.020196 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-19 04:23:29.020209 | orchestrator | Thursday 19 March 2026 04:23:27 +0000 (0:00:00.534) 0:00:07.181 ******** 2026-03-19 04:23:29.020310 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:29.020323 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:29.020336 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:23:29.020348 | orchestrator | 2026-03-19 04:23:29.020359 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-19 04:23:29.020378 | orchestrator | Thursday 19 March 2026 04:23:29 +0000 (0:00:01.175) 0:00:08.357 ******** 
2026-03-19 04:23:40.055352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:40.055532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:40.055576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:40.055599 | orchestrator | 2026-03-19 04:23:40.055613 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-19 04:23:40.055626 | orchestrator | Thursday 19 March 2026 04:23:32 +0000 (0:00:03.145) 0:00:11.503 ******** 2026-03-19 04:23:40.055638 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:40.055650 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:40.055661 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:23:40.055673 | orchestrator | 2026-03-19 04:23:40.055692 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-19 04:23:40.055704 | orchestrator | Thursday 19 March 2026 04:23:33 +0000 (0:00:01.054) 0:00:12.557 ******** 2026-03-19 04:23:40.055715 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:23:40.055726 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:23:40.055737 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:23:40.055748 | orchestrator | 2026-03-19 04:23:40.055759 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 04:23:40.055770 | orchestrator | Thursday 19 March 2026 04:23:36 +0000 (0:00:03.450) 0:00:16.007 ******** 2026-03-19 04:23:40.055785 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:23:40.055799 | orchestrator | 2026-03-19 04:23:40.055812 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-19 04:23:40.055824 | orchestrator | Thursday 19 March 2026 04:23:37 +0000 (0:00:01.047) 0:00:17.055 ******** 2026-03-19 04:23:40.055846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:42.389194 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:23:42.389425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:42.389451 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:42.389464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:42.389498 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:42.389510 | orchestrator | 2026-03-19 04:23:42.389522 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-19 04:23:42.389534 | orchestrator | Thursday 19 March 2026 04:23:40 +0000 (0:00:02.339) 0:00:19.395 ******** 2026-03-19 04:23:42.389574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:42.389589 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:23:42.389601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:42.389619 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:42.389641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:48.580838 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:48.580946 | orchestrator | 2026-03-19 04:23:48.580963 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-19 04:23:48.580977 | orchestrator | Thursday 19 March 2026 04:23:42 +0000 (0:00:02.334) 0:00:21.729 ******** 2026-03-19 04:23:48.581007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:48.581044 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:48.581057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:48.581070 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:23:48.581106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:48.581120 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:48.581132 | orchestrator | 2026-03-19 04:23:48.581151 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-19 04:23:48.581162 | orchestrator | Thursday 19 March 2026 04:23:45 +0000 (0:00:03.239) 0:00:24.969 ******** 2026-03-19 04:23:48.581173 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:48.581200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:51.815648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-19 04:23:51.815803 | orchestrator | 2026-03-19 04:23:51.815828 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-03-19 04:23:51.815846 | orchestrator | Thursday 19 March 2026 04:23:48 +0000 (0:00:02.955) 0:00:27.925 ******** 2026-03-19 04:23:51.815864 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:23:51.815883 | 
orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:51.815901 | orchestrator | } 2026-03-19 04:23:51.815918 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:23:51.815935 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:51.815947 | orchestrator | } 2026-03-19 04:23:51.815957 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:23:51.815966 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:23:51.815976 | orchestrator | } 2026-03-19 04:23:51.815986 | orchestrator | 2026-03-19 04:23:51.815996 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:23:51.816005 | orchestrator | Thursday 19 March 2026 04:23:48 +0000 (0:00:00.312) 0:00:28.238 ******** 2026-03-19 04:23:51.816052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:51.816075 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:23:51.816086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:51.816096 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:23:51.816112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:23:51.816129 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:23:51.816139 | orchestrator | 2026-03-19 04:23:51.816148 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-19 04:23:51.816172 | orchestrator | Thursday 19 March 2026 04:23:51 +0000 (0:00:02.907) 0:00:31.146 ******** 2026-03-19 04:24:00.912790 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.912936 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.912957 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.912969 | orchestrator | 2026-03-19 04:24:00.912982 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-19 04:24:00.912995 | orchestrator | Thursday 19 March 2026 04:23:52 +0000 (0:00:00.333) 0:00:31.479 ******** 2026-03-19 04:24:00.913006 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913017 | orchestrator | 2026-03-19 04:24:00.913028 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-19 04:24:00.913039 | orchestrator | Thursday 19 March 2026 04:23:52 +0000 (0:00:00.116) 0:00:31.596 ******** 2026-03-19 04:24:00.913052 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913063 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913074 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913085 | 
orchestrator | 2026-03-19 04:24:00.913096 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-19 04:24:00.913107 | orchestrator | Thursday 19 March 2026 04:23:52 +0000 (0:00:00.316) 0:00:31.913 ******** 2026-03-19 04:24:00.913118 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913129 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913140 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913151 | orchestrator | 2026-03-19 04:24:00.913162 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-03-19 04:24:00.913173 | orchestrator | Thursday 19 March 2026 04:23:53 +0000 (0:00:00.541) 0:00:32.454 ******** 2026-03-19 04:24:00.913190 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913207 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913226 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913272 | orchestrator | 2026-03-19 04:24:00.913290 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-19 04:24:00.913301 | orchestrator | Thursday 19 March 2026 04:23:53 +0000 (0:00:00.330) 0:00:32.785 ******** 2026-03-19 04:24:00.913312 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913324 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913337 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913350 | orchestrator | 2026-03-19 04:24:00.913363 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-19 04:24:00.913375 | orchestrator | Thursday 19 March 2026 04:23:53 +0000 (0:00:00.338) 0:00:33.123 ******** 2026-03-19 04:24:00.913389 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913401 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913414 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913427 | 
orchestrator | 2026-03-19 04:24:00.913440 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-19 04:24:00.913453 | orchestrator | Thursday 19 March 2026 04:23:54 +0000 (0:00:00.311) 0:00:33.435 ******** 2026-03-19 04:24:00.913465 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913479 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913492 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913505 | orchestrator | 2026-03-19 04:24:00.913516 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-19 04:24:00.913526 | orchestrator | Thursday 19 March 2026 04:23:54 +0000 (0:00:00.512) 0:00:33.948 ******** 2026-03-19 04:24:00.913566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:24:00.913578 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:24:00.913589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:24:00.913602 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:24:00.913637 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:24:00.913654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:24:00.913671 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913686 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:24:00.913703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:24:00.913737 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:24:00.913754 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913770 | orchestrator | 2026-03-19 04:24:00.913787 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to 
temp file] *** 2026-03-19 04:24:00.913804 | orchestrator | Thursday 19 March 2026 04:23:54 +0000 (0:00:00.351) 0:00:34.300 ******** 2026-03-19 04:24:00.913822 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913840 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913857 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913875 | orchestrator | 2026-03-19 04:24:00.913893 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-19 04:24:00.913911 | orchestrator | Thursday 19 March 2026 04:23:55 +0000 (0:00:00.344) 0:00:34.644 ******** 2026-03-19 04:24:00.913926 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.913943 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.913960 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.913978 | orchestrator | 2026-03-19 04:24:00.913996 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-19 04:24:00.914086 | orchestrator | Thursday 19 March 2026 04:23:55 +0000 (0:00:00.496) 0:00:35.141 ******** 2026-03-19 04:24:00.914111 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914130 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914150 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914169 | orchestrator | 2026-03-19 04:24:00.914230 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-19 04:24:00.914296 | orchestrator | Thursday 19 March 2026 04:23:56 +0000 (0:00:00.335) 0:00:35.477 ******** 2026-03-19 04:24:00.914317 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914337 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914356 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914375 | orchestrator | 2026-03-19 04:24:00.914395 | orchestrator | TASK [mariadb : Starting first MariaDB container] 
****************************** 2026-03-19 04:24:00.914441 | orchestrator | Thursday 19 March 2026 04:23:56 +0000 (0:00:00.338) 0:00:35.815 ******** 2026-03-19 04:24:00.914462 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914482 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914501 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914539 | orchestrator | 2026-03-19 04:24:00.914573 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-19 04:24:00.914593 | orchestrator | Thursday 19 March 2026 04:23:56 +0000 (0:00:00.337) 0:00:36.153 ******** 2026-03-19 04:24:00.914613 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914632 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914652 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914671 | orchestrator | 2026-03-19 04:24:00.914703 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-19 04:24:00.914721 | orchestrator | Thursday 19 March 2026 04:23:57 +0000 (0:00:00.596) 0:00:36.750 ******** 2026-03-19 04:24:00.914741 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914780 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914798 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914816 | orchestrator | 2026-03-19 04:24:00.914834 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-19 04:24:00.914852 | orchestrator | Thursday 19 March 2026 04:23:57 +0000 (0:00:00.330) 0:00:37.081 ******** 2026-03-19 04:24:00.914869 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:00.914888 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.914907 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:00.914926 | orchestrator | 2026-03-19 04:24:00.914946 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] 
**************************** 2026-03-19 04:24:00.914965 | orchestrator | Thursday 19 March 2026 04:23:58 +0000 (0:00:00.361) 0:00:37.443 ******** 2026-03-19 04:24:00.915003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2026-03-19 04:24:00.915029 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:00.915066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:24:03.416551 
| orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:03.416640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:24:03.416653 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 04:24:03.416661 | orchestrator | 2026-03-19 04:24:03.416682 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-19 04:24:03.416690 | orchestrator | Thursday 19 March 2026 04:24:00 +0000 (0:00:02.811) 0:00:40.254 ******** 2026-03-19 04:24:03.416697 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:03.416704 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:03.416710 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:03.416717 | orchestrator | 2026-03-19 04:24:03.416724 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-19 04:24:03.416731 | orchestrator | Thursday 19 March 2026 04:24:01 +0000 (0:00:00.439) 0:00:40.693 ******** 2026-03-19 04:24:03.416750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:24:03.416775 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:24:03.416783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:24:03.416794 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:24:03.416802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-19 04:24:03.416814 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:24:03.416821 | orchestrator | 2026-03-19 04:24:03.416827 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-19 04:24:03.416834 | orchestrator | Thursday 19 March 2026 04:24:03 +0000 (0:00:01.903) 0:00:42.597 ******** 2026-03-19 04:24:03.416845 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.202170 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.202317 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.202453 | orchestrator | 2026-03-19 04:26:00.202476 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-19 04:26:00.202497 | orchestrator | Thursday 19 March 2026 04:24:03 +0000 (0:00:00.631) 0:00:43.228 ******** 2026-03-19 04:26:00.202515 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.202534 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.202552 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.202571 | orchestrator | 2026-03-19 04:26:00.202589 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-19 04:26:00.202611 | orchestrator | Thursday 19 March 2026 04:24:04 +0000 (0:00:00.412) 0:00:43.641 ******** 2026-03-19 04:26:00.202634 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
04:26:00.202658 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.202679 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.202701 | orchestrator | 2026-03-19 04:26:00.202722 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-19 04:26:00.202745 | orchestrator | Thursday 19 March 2026 04:24:04 +0000 (0:00:00.318) 0:00:43.959 ******** 2026-03-19 04:26:00.202767 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.202858 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.202884 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.202909 | orchestrator | 2026-03-19 04:26:00.202931 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-19 04:26:00.202952 | orchestrator | Thursday 19 March 2026 04:24:05 +0000 (0:00:00.863) 0:00:44.823 ******** 2026-03-19 04:26:00.202972 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.202992 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.203012 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.203032 | orchestrator | 2026-03-19 04:26:00.203053 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-19 04:26:00.203073 | orchestrator | Thursday 19 March 2026 04:24:06 +0000 (0:00:00.910) 0:00:45.734 ******** 2026-03-19 04:26:00.203093 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203115 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203135 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203155 | orchestrator | 2026-03-19 04:26:00.203176 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-19 04:26:00.203196 | orchestrator | Thursday 19 March 2026 04:24:07 +0000 (0:00:00.844) 0:00:46.578 ******** 2026-03-19 04:26:00.203217 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203237 | 
orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203257 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203278 | orchestrator | 2026-03-19 04:26:00.203297 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-19 04:26:00.203382 | orchestrator | Thursday 19 March 2026 04:24:07 +0000 (0:00:00.308) 0:00:46.887 ******** 2026-03-19 04:26:00.203405 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203426 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203446 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203468 | orchestrator | 2026-03-19 04:26:00.203508 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-19 04:26:00.203529 | orchestrator | Thursday 19 March 2026 04:24:07 +0000 (0:00:00.305) 0:00:47.192 ******** 2026-03-19 04:26:00.203547 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203566 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203584 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203602 | orchestrator | 2026-03-19 04:26:00.203621 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-19 04:26:00.203639 | orchestrator | Thursday 19 March 2026 04:24:08 +0000 (0:00:00.961) 0:00:48.154 ******** 2026-03-19 04:26:00.203657 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203676 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203694 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203714 | orchestrator | 2026-03-19 04:26:00.203732 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-19 04:26:00.203750 | orchestrator | Thursday 19 March 2026 04:24:09 +0000 (0:00:00.345) 0:00:48.499 ******** 2026-03-19 04:26:00.203768 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.203787 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.203805 
| orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.203825 | orchestrator | 2026-03-19 04:26:00.203845 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-19 04:26:00.203865 | orchestrator | Thursday 19 March 2026 04:24:09 +0000 (0:00:00.341) 0:00:48.841 ******** 2026-03-19 04:26:00.203886 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.203906 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.203926 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.203946 | orchestrator | 2026-03-19 04:26:00.203965 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-19 04:26:00.203982 | orchestrator | Thursday 19 March 2026 04:24:11 +0000 (0:00:02.501) 0:00:51.342 ******** 2026-03-19 04:26:00.203998 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.204015 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.204033 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.204053 | orchestrator | 2026-03-19 04:26:00.204072 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-19 04:26:00.204091 | orchestrator | Thursday 19 March 2026 04:24:12 +0000 (0:00:00.560) 0:00:51.902 ******** 2026-03-19 04:26:00.204110 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.204128 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.204146 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.204163 | orchestrator | 2026-03-19 04:26:00.204179 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-19 04:26:00.204191 | orchestrator | Thursday 19 March 2026 04:24:12 +0000 (0:00:00.346) 0:00:52.249 ******** 2026-03-19 04:26:00.204201 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.204212 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.204223 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 04:26:00.204233 | orchestrator | 2026-03-19 04:26:00.204244 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 04:26:00.204255 | orchestrator | Thursday 19 March 2026 04:24:13 +0000 (0:00:00.711) 0:00:52.961 ******** 2026-03-19 04:26:00.204265 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.204276 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.204286 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.204321 | orchestrator | 2026-03-19 04:26:00.204399 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-19 04:26:00.204412 | orchestrator | Thursday 19 March 2026 04:24:14 +0000 (0:00:00.502) 0:00:53.464 ******** 2026-03-19 04:26:00.204442 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.204453 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-19 04:26:00.204464 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-19 04:26:00.204486 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.204497 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.204507 | orchestrator | 2026-03-19 04:26:00.204518 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-19 04:26:00.204529 | orchestrator | Thursday 19 March 2026 04:24:14 +0000 (0:00:00.781) 0:00:54.246 ******** 2026-03-19 04:26:00.204539 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:26:00.204550 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:26:00.204561 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:26:00.204571 | orchestrator | 2026-03-19 04:26:00.204582 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-19 04:26:00.204593 | orchestrator | Thursday 19 March 2026 04:24:15 +0000 (0:00:00.582) 0:00:54.829 ******** 
2026-03-19 04:26:00.204604 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:00.204614 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:00.204625 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:00.204635 | orchestrator | 2026-03-19 04:26:00.204646 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-19 04:26:00.204657 | orchestrator | 2026-03-19 04:26:00.204667 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 04:26:00.204682 | orchestrator | Thursday 19 March 2026 04:24:16 +0000 (0:00:00.765) 0:00:55.594 ******** 2026-03-19 04:26:00.204699 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:26:00.204715 | orchestrator | 2026-03-19 04:26:00.204730 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 04:26:00.204745 | orchestrator | Thursday 19 March 2026 04:24:41 +0000 (0:00:25.167) 0:01:20.761 ******** 2026-03-19 04:26:00.204761 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left). 
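The "Wait for MariaDB service port liveness" task above retries until the restarted container accepts TCP connections (one retry is visible in the log before it succeeds). Kolla-ansible implements this with Ansible's `wait_for`; a minimal standalone sketch of the same retry loop, with illustrative retry/delay defaults, looks like:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 1.0) -> bool:
    """Retry a TCP connect until the port accepts connections.

    A sketch of the port-liveness wait above: the real task uses
    Ansible's wait_for module; retries/delay values here are
    illustrative, not the role's actual settings.
    """
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # handshake completed: service port is live
        except OSError:
            if attempt < retries - 1:
                time.sleep(delay)
    return False
```

The log's `FAILED - RETRYING: ... (10 retries left)` followed by `ok` corresponds to one failed connect attempt before the port came up.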
2026-03-19 04:26:00.204778 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.204794 | orchestrator | 2026-03-19 04:26:00.204807 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 04:26:00.204821 | orchestrator | Thursday 19 March 2026 04:24:49 +0000 (0:00:08.283) 0:01:29.044 ******** 2026-03-19 04:26:00.204836 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:00.204854 | orchestrator | 2026-03-19 04:26:00.204870 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-19 04:26:00.204887 | orchestrator | 2026-03-19 04:26:00.204917 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 04:26:00.204935 | orchestrator | Thursday 19 March 2026 04:24:52 +0000 (0:00:02.716) 0:01:31.761 ******** 2026-03-19 04:26:00.204949 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:26:00.204958 | orchestrator | 2026-03-19 04:26:00.204968 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 04:26:00.204978 | orchestrator | Thursday 19 March 2026 04:25:17 +0000 (0:00:24.881) 0:01:56.642 ******** 2026-03-19 04:26:00.204987 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.204997 | orchestrator | 2026-03-19 04:26:00.205006 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 04:26:00.205016 | orchestrator | Thursday 19 March 2026 04:25:23 +0000 (0:00:06.649) 0:02:03.292 ******** 2026-03-19 04:26:00.205025 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:00.205035 | orchestrator | 2026-03-19 04:26:00.205044 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-19 04:26:00.205054 | orchestrator | 2026-03-19 04:26:00.205063 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-19 
04:26:00.205073 | orchestrator | Thursday 19 March 2026 04:25:26 +0000 (0:00:02.998) 0:02:06.291 ******** 2026-03-19 04:26:00.205091 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:26:00.205101 | orchestrator | 2026-03-19 04:26:00.205111 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-19 04:26:00.205120 | orchestrator | Thursday 19 March 2026 04:25:50 +0000 (0:00:24.031) 0:02:30.323 ******** 2026-03-19 04:26:00.205130 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.205140 | orchestrator | 2026-03-19 04:26:00.205149 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-19 04:26:00.205158 | orchestrator | Thursday 19 March 2026 04:25:55 +0000 (0:00:04.744) 0:02:35.067 ******** 2026-03-19 04:26:00.205168 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-19 04:26:00.205178 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-19 04:26:00.205187 | orchestrator | mariadb_bootstrap_restart 2026-03-19 04:26:00.205197 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:00.205206 | orchestrator | 2026-03-19 04:26:00.205216 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-19 04:26:00.205225 | orchestrator | skipping: no hosts matched 2026-03-19 04:26:00.205235 | orchestrator | 2026-03-19 04:26:00.205244 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-19 04:26:00.205254 | orchestrator | skipping: no hosts matched 2026-03-19 04:26:00.205263 | orchestrator | 2026-03-19 04:26:00.205273 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-19 04:26:00.205283 | orchestrator | 2026-03-19 04:26:00.205292 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-19 
04:26:00.205301 | orchestrator | Thursday 19 March 2026 04:25:59 +0000 (0:00:03.377) 0:02:38.444 ******** 2026-03-19 04:26:00.205311 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:26:00.205320 | orchestrator | 2026-03-19 04:26:00.205454 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-19 04:26:00.205501 | orchestrator | Thursday 19 March 2026 04:26:00 +0000 (0:00:01.084) 0:02:39.528 ******** 2026-03-19 04:26:39.836655 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.836767 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.836781 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:39.836793 | orchestrator | 2026-03-19 04:26:39.836804 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-19 04:26:39.836815 | orchestrator | Thursday 19 March 2026 04:26:02 +0000 (0:00:02.494) 0:02:42.023 ******** 2026-03-19 04:26:39.836826 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.836836 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.836845 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:26:39.836855 | orchestrator | 2026-03-19 04:26:39.836865 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-19 04:26:39.836875 | orchestrator | Thursday 19 March 2026 04:26:05 +0000 (0:00:02.552) 0:02:44.576 ******** 2026-03-19 04:26:39.836885 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.836895 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.836904 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:39.836914 | orchestrator | 2026-03-19 04:26:39.836924 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-19 04:26:39.836934 | orchestrator | Thursday 19 March 2026 04:26:07 +0000 (0:00:02.444) 0:02:47.020 ******** 2026-03-19 
04:26:39.836944 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.836953 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.836963 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:26:39.836973 | orchestrator | 2026-03-19 04:26:39.836983 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-19 04:26:39.836992 | orchestrator | Thursday 19 March 2026 04:26:09 +0000 (0:00:02.307) 0:02:49.328 ******** 2026-03-19 04:26:39.837002 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:39.837012 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:39.837042 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:39.837053 | orchestrator | 2026-03-19 04:26:39.837062 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-19 04:26:39.837072 | orchestrator | Thursday 19 March 2026 04:26:15 +0000 (0:00:05.276) 0:02:54.604 ******** 2026-03-19 04:26:39.837082 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:39.837092 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.837101 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.837111 | orchestrator | 2026-03-19 04:26:39.837120 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-19 04:26:39.837130 | orchestrator | Thursday 19 March 2026 04:26:17 +0000 (0:00:02.339) 0:02:56.944 ******** 2026-03-19 04:26:39.837140 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:26:39.837149 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:26:39.837159 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:26:39.837170 | orchestrator | 2026-03-19 04:26:39.837181 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-19 04:26:39.837193 | orchestrator | Thursday 19 March 2026 04:26:18 +0000 (0:00:00.743) 0:02:57.687 ******** 2026-03-19 04:26:39.837203 | 
orchestrator | ok: [testbed-node-0] 2026-03-19 04:26:39.837228 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:26:39.837240 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:26:39.837250 | orchestrator | 2026-03-19 04:26:39.837261 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-19 04:26:39.837272 | orchestrator | Thursday 19 March 2026 04:26:21 +0000 (0:00:02.771) 0:03:00.459 ******** 2026-03-19 04:26:39.837283 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:26:39.837294 | orchestrator | 2026-03-19 04:26:39.837305 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-03-19 04:26:39.837315 | orchestrator | Thursday 19 March 2026 04:26:22 +0000 (0:00:01.109) 0:03:01.568 ******** 2026-03-19 04:26:39.837326 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:26:39.837390 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:26:39.837402 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:26:39.837413 | orchestrator | 2026-03-19 04:26:39.837424 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 04:26:39.837436 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-19 04:26:39.837449 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-19 04:26:39.837460 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-19 04:26:39.837471 | orchestrator | 2026-03-19 04:26:39.837481 | orchestrator | 2026-03-19 04:26:39.837493 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:26:39.837504 | orchestrator | Thursday 19 March 2026 04:26:39 +0000 (0:00:17.226) 0:03:18.794 ******** 2026-03-19 04:26:39.837515 | 
orchestrator | =============================================================================== 2026-03-19 04:26:39.837525 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 74.08s 2026-03-19 04:26:39.837534 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 19.68s 2026-03-19 04:26:39.837544 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.23s 2026-03-19 04:26:39.837553 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.09s 2026-03-19 04:26:39.837563 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.28s 2026-03-19 04:26:39.837572 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.45s 2026-03-19 04:26:39.837582 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.24s 2026-03-19 04:26:39.837603 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.15s 2026-03-19 04:26:39.837612 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.96s 2026-03-19 04:26:39.837640 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.91s 2026-03-19 04:26:39.837651 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.81s 2026-03-19 04:26:39.837661 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.77s 2026-03-19 04:26:39.837670 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.55s 2026-03-19 04:26:39.837680 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.53s 2026-03-19 04:26:39.837689 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.50s 2026-03-19 04:26:39.837699 | orchestrator | 
mariadb : Creating shard root mysql user -------------------------------- 2.49s 2026-03-19 04:26:39.837708 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.44s 2026-03-19 04:26:39.837718 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.34s 2026-03-19 04:26:39.837728 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.34s 2026-03-19 04:26:39.837737 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.33s 2026-03-19 04:26:40.099008 | orchestrator | + osism apply -a upgrade rabbitmq 2026-03-19 04:26:42.112258 | orchestrator | 2026-03-19 04:26:42 | INFO  | Task 81b9a610-2940-4826-a8ff-0b55c0161c2c (rabbitmq) was prepared for execution. 2026-03-19 04:26:42.112468 | orchestrator | 2026-03-19 04:26:42 | INFO  | It takes a moment until task 81b9a610-2940-4826-a8ff-0b55c0161c2c (rabbitmq) has been started and output is visible here. 
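The rabbitmq upgrade tasks that follow assert two preconditions: the running RabbitMQ is at most one version behind the new image, and the change is not a downgrade. The exact comparison rule kolla-ansible applies may differ; this hypothetical sketch only illustrates the shape of those assertions with a simple (major, minor) rule:

```python
# Hypothetical sketch of the two rabbitmq assertions ("at most one
# version behind" / "catch downgrade"). The real check lives in the
# rabbitmq role's upgrade tasks and its exact rule may differ.

def parse_version(version: str) -> tuple[int, int]:
    """'4.1.5' -> (4, 1); the patch level is ignored for this check."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def check_upgrade_step(current: str, new: str) -> None:
    cur, nxt = parse_version(current), parse_version(new)
    if nxt < cur:
        # "Catch when RabbitMQ is being downgraded"
        raise RuntimeError(f"downgrade {current} -> {new} is not supported")
    if nxt[0] == cur[0] and nxt[1] - cur[1] > 1:
        # "Check if running RabbitMQ is at most one version behind"
        raise RuntimeError(f"{current} is more than one version behind {new}")
```

In the run above both assertions pass (`"msg": "All assertions passed"`), so the upgrade continues with config deployment.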
2026-03-19 04:27:25.168445 | orchestrator | 2026-03-19 04:27:25.168536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 04:27:25.168546 | orchestrator | 2026-03-19 04:27:25.168553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 04:27:25.168559 | orchestrator | Thursday 19 March 2026 04:26:47 +0000 (0:00:01.477) 0:00:01.477 ******** 2026-03-19 04:27:25.168565 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:25.168571 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:27:25.168577 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:27:25.168582 | orchestrator | 2026-03-19 04:27:25.168588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 04:27:25.168594 | orchestrator | Thursday 19 March 2026 04:26:49 +0000 (0:00:01.781) 0:00:03.258 ******** 2026-03-19 04:27:25.168600 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-19 04:27:25.168606 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-19 04:27:25.168611 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-19 04:27:25.168617 | orchestrator | 2026-03-19 04:27:25.168634 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-19 04:27:25.168639 | orchestrator | 2026-03-19 04:27:25.168645 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 04:27:25.168651 | orchestrator | Thursday 19 March 2026 04:26:50 +0000 (0:00:01.724) 0:00:04.982 ******** 2026-03-19 04:27:25.168656 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:27:25.168662 | orchestrator | 2026-03-19 04:27:25.168668 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-03-19 04:27:25.168673 | orchestrator | Thursday 19 March 2026 04:26:52 +0000 (0:00:02.113) 0:00:07.096 ******** 2026-03-19 04:27:25.168679 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:25.168685 | orchestrator | 2026-03-19 04:27:25.168690 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-19 04:27:25.168696 | orchestrator | Thursday 19 March 2026 04:26:55 +0000 (0:00:02.422) 0:00:09.519 ******** 2026-03-19 04:27:25.168701 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:25.168723 | orchestrator | 2026-03-19 04:27:25.168729 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-19 04:27:25.168734 | orchestrator | Thursday 19 March 2026 04:26:58 +0000 (0:00:03.307) 0:00:12.827 ******** 2026-03-19 04:27:25.168740 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:27:25.168746 | orchestrator | 2026-03-19 04:27:25.168754 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-19 04:27:25.168764 | orchestrator | Thursday 19 March 2026 04:27:09 +0000 (0:00:10.376) 0:00:23.203 ******** 2026-03-19 04:27:25.168772 | orchestrator | ok: [testbed-node-0] => { 2026-03-19 04:27:25.168780 | orchestrator |  "changed": false, 2026-03-19 04:27:25.168788 | orchestrator |  "msg": "All assertions passed" 2026-03-19 04:27:25.168797 | orchestrator | } 2026-03-19 04:27:25.168806 | orchestrator | 2026-03-19 04:27:25.168815 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-19 04:27:25.168823 | orchestrator | Thursday 19 March 2026 04:27:10 +0000 (0:00:01.284) 0:00:24.488 ******** 2026-03-19 04:27:25.168832 | orchestrator | ok: [testbed-node-0] => { 2026-03-19 04:27:25.168840 | orchestrator |  "changed": false, 2026-03-19 04:27:25.168849 | orchestrator |  "msg": "All assertions passed" 2026-03-19 04:27:25.168858 | orchestrator | } 2026-03-19 04:27:25.168867 | 
orchestrator | 2026-03-19 04:27:25.168876 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 04:27:25.168885 | orchestrator | Thursday 19 March 2026 04:27:12 +0000 (0:00:01.673) 0:00:26.162 ******** 2026-03-19 04:27:25.168894 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:27:25.168903 | orchestrator | 2026-03-19 04:27:25.168912 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-19 04:27:25.168920 | orchestrator | Thursday 19 March 2026 04:27:13 +0000 (0:00:01.639) 0:00:27.801 ******** 2026-03-19 04:27:25.168929 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:25.168938 | orchestrator | 2026-03-19 04:27:25.168947 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-19 04:27:25.168956 | orchestrator | Thursday 19 March 2026 04:27:15 +0000 (0:00:02.278) 0:00:30.080 ******** 2026-03-19 04:27:25.168964 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:25.168973 | orchestrator | 2026-03-19 04:27:25.168981 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-19 04:27:25.168990 | orchestrator | Thursday 19 March 2026 04:27:19 +0000 (0:00:03.188) 0:00:33.269 ******** 2026-03-19 04:27:25.168999 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:27:25.169008 | orchestrator | 2026-03-19 04:27:25.169018 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-19 04:27:25.169027 | orchestrator | Thursday 19 March 2026 04:27:20 +0000 (0:00:01.834) 0:00:35.103 ******** 2026-03-19 04:27:25.169061 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:25.169081 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:25.169103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:25.169113 | orchestrator | 2026-03-19 04:27:25.169122 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-19 04:27:25.169133 | orchestrator | Thursday 19 March 2026 04:27:22 +0000 (0:00:01.759) 0:00:36.863 ******** 2026-03-19 04:27:25.169142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:25.169160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:44.298880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:44.298989 | orchestrator | 2026-03-19 04:27:44.299004 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-19 04:27:44.299015 | orchestrator | Thursday 19 March 2026 04:27:25 +0000 (0:00:02.405) 0:00:39.268 ******** 2026-03-19 04:27:44.299024 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 04:27:44.299033 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 04:27:44.299042 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-19 04:27:44.299051 | orchestrator | 2026-03-19 04:27:44.299060 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-19 04:27:44.299069 | orchestrator | Thursday 19 March 2026 04:27:27 +0000 (0:00:02.327) 0:00:41.596 ******** 2026-03-19 04:27:44.299078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 04:27:44.299086 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 04:27:44.299095 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-19 04:27:44.299105 | orchestrator | 2026-03-19 04:27:44.299114 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-19 04:27:44.299123 | orchestrator | Thursday 19 March 2026 04:27:30 +0000 (0:00:02.992) 0:00:44.589 ******** 2026-03-19 04:27:44.299131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 04:27:44.299140 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 04:27:44.299149 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-19 04:27:44.299158 | orchestrator | 2026-03-19 04:27:44.299174 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-19 04:27:44.299189 | orchestrator | Thursday 19 March 2026 04:27:32 +0000 (0:00:02.409) 0:00:46.998 ******** 2026-03-19 04:27:44.299203 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 04:27:44.299218 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 04:27:44.299233 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-19 04:27:44.299271 | orchestrator | 2026-03-19 04:27:44.299282 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-19 04:27:44.299291 | orchestrator | Thursday 19 March 2026 04:27:35 +0000 (0:00:02.417) 0:00:49.416 ******** 2026-03-19 04:27:44.299299 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 04:27:44.299308 | orchestrator | ok: 
[testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 04:27:44.299316 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-19 04:27:44.299325 | orchestrator | 2026-03-19 04:27:44.299334 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-19 04:27:44.299342 | orchestrator | Thursday 19 March 2026 04:27:37 +0000 (0:00:02.330) 0:00:51.746 ******** 2026-03-19 04:27:44.299351 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 04:27:44.299359 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 04:27:44.299368 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-19 04:27:44.299376 | orchestrator | 2026-03-19 04:27:44.299460 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-19 04:27:44.299471 | orchestrator | Thursday 19 March 2026 04:27:40 +0000 (0:00:02.456) 0:00:54.203 ******** 2026-03-19 04:27:44.299481 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:27:44.299491 | orchestrator | 2026-03-19 04:27:44.299517 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-19 04:27:44.299528 | orchestrator | Thursday 19 March 2026 04:27:41 +0000 (0:00:01.713) 0:00:55.916 ******** 2026-03-19 04:27:44.299546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:44.299560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:44.299579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:44.299590 | orchestrator | 2026-03-19 04:27:44.299600 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-19 04:27:44.299610 | orchestrator | Thursday 19 March 2026 04:27:44 +0000 (0:00:02.274) 0:00:58.190 ******** 2026-03-19 04:27:44.299633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451104 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:27:53.451211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451230 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:27:53.451247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451291 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:27:53.451304 | orchestrator | 2026-03-19 04:27:53.451321 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-19 04:27:53.451338 | orchestrator | Thursday 19 March 2026 04:27:45 +0000 (0:00:01.525) 0:00:59.716 ******** 2026-03-19 04:27:53.451356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451557 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:27:53.451575 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:27:53.451593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:27:53.451625 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:27:53.451645 | orchestrator | 2026-03-19 04:27:53.451662 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-19 04:27:53.451680 | orchestrator | Thursday 19 March 2026 04:27:47 +0000 (0:00:01.769) 0:01:01.486 ******** 2026-03-19 04:27:53.451693 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:27:53.451706 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:27:53.451723 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:27:53.451740 | orchestrator | 2026-03-19 04:27:53.451756 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-19 04:27:53.451774 | orchestrator | Thursday 19 March 2026 04:27:51 +0000 (0:00:03.842) 0:01:05.328 ******** 2026-03-19 04:27:53.451792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:27:53.451826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:29:42.949257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-19 04:29:42.949367 | orchestrator | 2026-03-19 04:29:42.949378 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-19 04:29:42.949385 | orchestrator | Thursday 19 March 2026 04:27:53 +0000 (0:00:02.230) 0:01:07.559 ******** 2026-03-19 04:29:42.949393 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:29:42.949400 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:29:42.949407 | orchestrator | } 2026-03-19 04:29:42.949413 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:29:42.949420 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:29:42.949426 | orchestrator | } 2026-03-19 04:29:42.949432 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:29:42.949438 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:29:42.949444 | orchestrator | } 2026-03-19 04:29:42.949451 | orchestrator | 2026-03-19 04:29:42.949457 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:29:42.949463 | orchestrator | Thursday 19 March 2026 04:27:54 +0000 (0:00:01.379) 0:01:08.938 ******** 2026-03-19 04:29:42.949471 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:29:42.949482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:29:42.949490 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:29:42.949497 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:29:42.949518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-19 04:29:42.949592 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:29:42.949608 | orchestrator | 2026-03-19 04:29:42.949620 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-19 04:29:42.949629 | orchestrator | Thursday 19 March 2026 04:27:56 +0000 (0:00:02.105) 0:01:11.044 ******** 2026-03-19 04:29:42.949640 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:29:42.949650 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:29:42.949660 | orchestrator | 
changed: [testbed-node-2] 2026-03-19 04:29:42.949670 | orchestrator | 2026-03-19 04:29:42.949681 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 04:29:42.949692 | orchestrator | 2026-03-19 04:29:42.949701 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 04:29:42.949708 | orchestrator | Thursday 19 March 2026 04:27:59 +0000 (0:00:02.082) 0:01:13.126 ******** 2026-03-19 04:29:42.949714 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:29:42.949721 | orchestrator | 2026-03-19 04:29:42.949727 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 04:29:42.949733 | orchestrator | Thursday 19 March 2026 04:28:01 +0000 (0:00:02.226) 0:01:15.353 ******** 2026-03-19 04:29:42.949739 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:29:42.949746 | orchestrator | 2026-03-19 04:29:42.949754 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 04:29:42.949761 | orchestrator | Thursday 19 March 2026 04:28:12 +0000 (0:00:11.082) 0:01:26.436 ******** 2026-03-19 04:29:42.949767 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:29:42.949774 | orchestrator | 2026-03-19 04:29:42.949782 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 04:29:42.949789 | orchestrator | Thursday 19 March 2026 04:28:21 +0000 (0:00:09.171) 0:01:35.607 ******** 2026-03-19 04:29:42.949796 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:29:42.949803 | orchestrator | 2026-03-19 04:29:42.949810 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 04:29:42.949817 | orchestrator | 2026-03-19 04:29:42.949824 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 04:29:42.949831 | orchestrator | 
Thursday 19 March 2026 04:28:31 +0000 (0:00:10.388) 0:01:45.996 ******** 2026-03-19 04:29:42.949838 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:29:42.949845 | orchestrator | 2026-03-19 04:29:42.949853 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 04:29:42.949859 | orchestrator | Thursday 19 March 2026 04:28:33 +0000 (0:00:01.757) 0:01:47.754 ******** 2026-03-19 04:29:42.949867 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:29:42.949874 | orchestrator | 2026-03-19 04:29:42.949881 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 04:29:42.949889 | orchestrator | Thursday 19 March 2026 04:28:42 +0000 (0:00:09.233) 0:01:56.987 ******** 2026-03-19 04:29:42.949895 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:29:42.949902 | orchestrator | 2026-03-19 04:29:42.949910 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 04:29:42.949917 | orchestrator | Thursday 19 March 2026 04:28:57 +0000 (0:00:14.161) 0:02:11.149 ******** 2026-03-19 04:29:42.949924 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:29:42.949937 | orchestrator | 2026-03-19 04:29:42.949944 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-19 04:29:42.949951 | orchestrator | 2026-03-19 04:29:42.949959 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-19 04:29:42.949966 | orchestrator | Thursday 19 March 2026 04:29:06 +0000 (0:00:09.581) 0:02:20.730 ******** 2026-03-19 04:29:42.949973 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:29:42.949980 | orchestrator | 2026-03-19 04:29:42.949987 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-19 04:29:42.949994 | orchestrator | Thursday 19 March 2026 04:29:08 +0000 (0:00:01.792) 
0:02:22.523 ******** 2026-03-19 04:29:42.950001 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:29:42.950008 | orchestrator | 2026-03-19 04:29:42.950078 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-19 04:29:42.950091 | orchestrator | Thursday 19 March 2026 04:29:18 +0000 (0:00:09.894) 0:02:32.417 ******** 2026-03-19 04:29:42.950098 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:29:42.950106 | orchestrator | 2026-03-19 04:29:42.950113 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-19 04:29:42.950119 | orchestrator | Thursday 19 March 2026 04:29:32 +0000 (0:00:14.608) 0:02:47.026 ******** 2026-03-19 04:29:42.950125 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:29:42.950132 | orchestrator | 2026-03-19 04:29:42.950138 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-19 04:29:42.950144 | orchestrator | 2026-03-19 04:29:42.950151 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-19 04:29:42.950164 | orchestrator | Thursday 19 March 2026 04:29:42 +0000 (0:00:10.018) 0:02:57.044 ******** 2026-03-19 04:29:49.125766 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:29:49.125870 | orchestrator | 2026-03-19 04:29:49.125884 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-19 04:29:49.125894 | orchestrator | Thursday 19 March 2026 04:29:44 +0000 (0:00:01.465) 0:02:58.510 ******** 2026-03-19 04:29:49.125903 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:29:49.125914 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:29:49.125923 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:29:49.125931 | orchestrator | 2026-03-19 04:29:49.125941 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-19 04:29:49.125951 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-19 04:29:49.125962 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 04:29:49.125971 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-19 04:29:49.125980 | orchestrator | 2026-03-19 04:29:49.125989 | orchestrator | 2026-03-19 04:29:49.125998 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 04:29:49.126007 | orchestrator | Thursday 19 March 2026 04:29:48 +0000 (0:00:04.396) 0:03:02.906 ******** 2026-03-19 04:29:49.126068 | orchestrator | =============================================================================== 2026-03-19 04:29:49.126079 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.94s 2026-03-19 04:29:49.126088 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 30.21s 2026-03-19 04:29:49.126097 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.99s 2026-03-19 04:29:49.126106 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.38s 2026-03-19 04:29:49.126115 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.78s 2026-03-19 04:29:49.126124 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.40s 2026-03-19 04:29:49.126154 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.84s 2026-03-19 04:29:49.126164 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.31s 2026-03-19 04:29:49.126173 | orchestrator | rabbitmq : List RabbitMQ policies 
--------------------------------------- 3.19s 2026-03-19 04:29:49.126181 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.99s 2026-03-19 04:29:49.126190 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.46s 2026-03-19 04:29:49.126199 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.42s 2026-03-19 04:29:49.126208 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.42s 2026-03-19 04:29:49.126218 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.41s 2026-03-19 04:29:49.126233 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.40s 2026-03-19 04:29:49.126248 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.33s 2026-03-19 04:29:49.126263 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.33s 2026-03-19 04:29:49.126277 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.28s 2026-03-19 04:29:49.126290 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.27s 2026-03-19 04:29:49.126305 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.23s 2026-03-19 04:29:49.401595 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-19 04:29:51.413383 | orchestrator | 2026-03-19 04:29:51 | INFO  | Task 9d4a5da2-6df2-450a-be62-62c73acb8878 (openvswitch) was prepared for execution. 2026-03-19 04:29:51.413500 | orchestrator | 2026-03-19 04:29:51 | INFO  | It takes a moment until task 9d4a5da2-6df2-450a-be62-62c73acb8878 (openvswitch) has been started and output is visible here. 
2026-03-19 04:30:15.918133 | orchestrator | 2026-03-19 04:30:15.918265 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-19 04:30:15.918286 | orchestrator | 2026-03-19 04:30:15.918300 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-19 04:30:15.918312 | orchestrator | Thursday 19 March 2026 04:29:57 +0000 (0:00:01.625) 0:00:01.625 ******** 2026-03-19 04:30:15.918324 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:30:15.918338 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:30:15.918348 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:30:15.918355 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:30:15.918363 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:30:15.918371 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:30:15.918378 | orchestrator | 2026-03-19 04:30:15.918399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-19 04:30:15.918407 | orchestrator | Thursday 19 March 2026 04:29:59 +0000 (0:00:02.310) 0:00:03.935 ******** 2026-03-19 04:30:15.918414 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918422 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918430 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918437 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918444 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918451 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-19 04:30:15.918459 | orchestrator | 2026-03-19 04:30:15.918466 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-19 04:30:15.918473 | orchestrator | 2026-03-19 04:30:15.918480 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-19 04:30:15.918488 | orchestrator | Thursday 19 March 2026 04:30:02 +0000 (0:00:02.828) 0:00:06.764 ******** 2026-03-19 04:30:15.918516 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:30:15.918526 | orchestrator | 2026-03-19 04:30:15.918533 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-19 04:30:15.918541 | orchestrator | Thursday 19 March 2026 04:30:04 +0000 (0:00:02.285) 0:00:09.050 ******** 2026-03-19 04:30:15.918548 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-19 04:30:15.918556 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-19 04:30:15.918563 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-19 04:30:15.918596 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-19 04:30:15.918609 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-19 04:30:15.918618 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-19 04:30:15.918627 | orchestrator | 2026-03-19 04:30:15.918635 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-19 04:30:15.918643 | orchestrator | Thursday 19 March 2026 04:30:06 +0000 (0:00:02.036) 0:00:11.086 ******** 2026-03-19 04:30:15.918651 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-19 04:30:15.918660 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-19 04:30:15.918668 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-19 04:30:15.918676 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-19 
04:30:15.918684 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-19 04:30:15.918692 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-19 04:30:15.918699 | orchestrator | 2026-03-19 04:30:15.918706 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-19 04:30:15.918713 | orchestrator | Thursday 19 March 2026 04:30:09 +0000 (0:00:02.559) 0:00:13.646 ******** 2026-03-19 04:30:15.918720 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-19 04:30:15.918727 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:30:15.918735 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-19 04:30:15.918742 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:30:15.918749 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-19 04:30:15.918756 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:30:15.918763 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-19 04:30:15.918770 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:30:15.918777 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-19 04:30:15.918784 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:30:15.918792 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-19 04:30:15.918799 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:30:15.918805 | orchestrator | 2026-03-19 04:30:15.918813 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-19 04:30:15.918820 | orchestrator | Thursday 19 March 2026 04:30:11 +0000 (0:00:02.112) 0:00:15.759 ******** 2026-03-19 04:30:15.918827 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:30:15.918834 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:30:15.918841 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:30:15.918848 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 04:30:15.918855 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:30:15.918862 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:30:15.918869 | orchestrator | 2026-03-19 04:30:15.918876 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-19 04:30:15.918883 | orchestrator | Thursday 19 March 2026 04:30:13 +0000 (0:00:01.962) 0:00:17.722 ******** 2026-03-19 04:30:15.918915 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918945 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918953 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918970 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:15.918991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.239956 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240061 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240076 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240090 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240101 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240141 | orchestrator | 2026-03-19 04:30:18.240162 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-19 04:30:18.240184 | orchestrator | Thursday 19 March 2026 04:30:15 +0000 (0:00:02.635) 0:00:20.358 ******** 2026-03-19 04:30:18.240244 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240268 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240284 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240296 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240308 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240333 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:18.240354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281009 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281088 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281095 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281134 | orchestrator | 2026-03-19 04:30:24.281140 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-19 04:30:24.281145 | orchestrator | Thursday 19 March 2026 04:30:19 +0000 (0:00:03.678) 0:00:24.037 ******** 2026-03-19 04:30:24.281149 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:30:24.281154 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:30:24.281158 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:30:24.281162 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:30:24.281166 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:30:24.281170 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:30:24.281174 | orchestrator | 2026-03-19 04:30:24.281178 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-03-19 04:30:24.281192 | orchestrator | Thursday 19 March 2026 04:30:22 +0000 (0:00:02.694) 0:00:26.731 ******** 2026-03-19 04:30:24.281197 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:24.281239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-19 04:30:27.875389 | orchestrator | 2026-03-19 04:30:27.875400 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-03-19 04:30:27.875411 | orchestrator | Thursday 19 March 2026 04:30:25 +0000 (0:00:03.333) 0:00:30.065 ******** 2026-03-19 04:30:27.875422 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:30:27.875433 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:30:27.875444 | orchestrator | } 2026-03-19 04:30:27.875454 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:30:27.875463 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:30:27.875473 | orchestrator | } 2026-03-19 04:30:27.875483 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:30:27.875492 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 
04:30:27.875502 | orchestrator | } 2026-03-19 04:30:27.875519 | orchestrator | changed: [testbed-node-3] => { 2026-03-19 04:30:27.875529 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:30:27.875539 | orchestrator | } 2026-03-19 04:30:27.875548 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 04:30:27.875558 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:30:27.875568 | orchestrator | } 2026-03-19 04:30:27.875577 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 04:30:27.875668 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:30:27.875678 | orchestrator | } 2026-03-19 04:30:27.875688 | orchestrator | 2026-03-19 04:30:27.875698 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:30:27.875708 | orchestrator | Thursday 19 March 2026 04:30:27 +0000 (0:00:01.811) 0:00:31.876 ******** 2026-03-19 04:30:27.875719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:27.875730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-19 04:30:27.875741 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:30:27.875758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:27.875769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}})  2026-03-19 04:30:27.875786 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:30:58.407778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:58.407930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-19 04:30:58.407948 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:30:58.407963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:58.407989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-19 04:30:58.408001 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:30:58.408012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:58.408042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-19 04:30:58.408063 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:30:58.408075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-19 04:30:58.408086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-19 04:30:58.408098 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:30:58.408109 | orchestrator |
2026-03-19 04:30:58.408121 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408134 | orchestrator | Thursday 19 March 2026 04:30:29 +0000 (0:00:02.493) 0:00:34.370 ********
2026-03-19 04:30:58.408144 | orchestrator |
2026-03-19 04:30:58.408155 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408166 | orchestrator | Thursday 19 March 2026 04:30:30 +0000 (0:00:00.559) 0:00:34.930 ********
2026-03-19 04:30:58.408177 | orchestrator |
2026-03-19 04:30:58.408187 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408198 | orchestrator | Thursday 19 March 2026 04:30:30 +0000 (0:00:00.523) 0:00:35.453 ********
2026-03-19 04:30:58.408209 | orchestrator |
2026-03-19 04:30:58.408222 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408234 | orchestrator | Thursday 19 March 2026 04:30:31 +0000 (0:00:00.515) 0:00:35.969 ********
2026-03-19 04:30:58.408247 | orchestrator |
2026-03-19 04:30:58.408260 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408273 | orchestrator | Thursday 19 March 2026 04:30:32 +0000 (0:00:00.696) 0:00:36.665 ********
2026-03-19 04:30:58.408287 | orchestrator |
2026-03-19 04:30:58.408299 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-19 04:30:58.408311 | orchestrator | Thursday 19 March 2026 04:30:32 +0000 (0:00:00.512) 0:00:37.178 ********
2026-03-19 04:30:58.408323 | orchestrator |
2026-03-19 04:30:58.408341 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-19 04:30:58.408354 | orchestrator | Thursday 19 March 2026 04:30:33 +0000 (0:00:00.844) 0:00:38.023 ********
2026-03-19 04:30:58.408366 | orchestrator | changed: [testbed-node-3]
2026-03-19 04:30:58.408379 | orchestrator | changed: [testbed-node-5]
2026-03-19 04:30:58.408392 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:30:58.408404 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:30:58.408423 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:30:58.408435 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:30:58.408448 | orchestrator |
2026-03-19 04:30:58.408461 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-19 04:30:58.408474 | orchestrator | Thursday 19 March 2026 04:30:45 +0000 (0:00:11.617) 0:00:49.641 ********
2026-03-19 04:30:58.408487 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:30:58.408500 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:30:58.408513 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:30:58.408525 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:30:58.408538 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:30:58.408550 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:30:58.408563 | orchestrator |
2026-03-19 04:30:58.408575 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-19 04:30:58.408585 | orchestrator | Thursday 19 March 2026 04:30:47 +0000 (0:00:02.186) 0:00:51.827 ********
2026-03-19 04:30:58.408596 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:30:58.408607 | orchestrator |
changed: [testbed-node-3] 2026-03-19 04:30:58.408642 | orchestrator | changed: [testbed-node-5] 2026-03-19 04:30:58.408653 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:30:58.408664 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:30:58.408675 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:30:58.408686 | orchestrator | 2026-03-19 04:30:58.408697 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-19 04:30:58.408714 | orchestrator | Thursday 19 March 2026 04:30:58 +0000 (0:00:11.012) 0:01:02.840 ******** 2026-03-19 04:31:14.539968 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-19 04:31:14.540116 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-19 04:31:14.540143 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-19 04:31:14.540163 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-19 04:31:14.540183 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-19 04:31:14.540203 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-19 04:31:14.540222 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-19 04:31:14.540239 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-19 04:31:14.540258 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-19 04:31:14.540279 | orchestrator | ok: [testbed-node-5] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-19 04:31:14.540298 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-19 04:31:14.540317 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-19 04:31:14.540337 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540357 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540378 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540398 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540418 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540468 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-19 04:31:14.540490 | orchestrator | 2026-03-19 04:31:14.540512 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-19 04:31:14.540534 | orchestrator | Thursday 19 March 2026 04:31:06 +0000 (0:00:08.154) 0:01:10.995 ******** 2026-03-19 04:31:14.540554 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-19 04:31:14.540574 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:31:14.540595 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-19 04:31:14.540615 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:31:14.540705 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-19 
04:31:14.540728 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:31:14.540748 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-03-19 04:31:14.540768 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-03-19 04:31:14.540788 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-03-19 04:31:14.540809 | orchestrator |
2026-03-19 04:31:14.540828 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-19 04:31:14.540866 | orchestrator | Thursday 19 March 2026 04:31:09 +0000 (0:00:03.313) 0:01:14.309 ********
2026-03-19 04:31:14.540887 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.540907 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:31:14.540927 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.540947 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:31:14.540967 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.541013 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:31:14.541066 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.541086 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.541106 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-19 04:31:14.541126 | orchestrator |
2026-03-19 04:31:14.541146 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 04:31:14.541167 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 04:31:14.541188 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 04:31:14.541208 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-19 04:31:14.541228 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:31:14.541275 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:31:14.541296 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-19 04:31:14.541317 | orchestrator |
2026-03-19 04:31:14.541337 | orchestrator |
2026-03-19 04:31:14.541357 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 04:31:14.541377 | orchestrator | Thursday 19 March 2026 04:31:14 +0000 (0:00:04.276) 0:01:18.586 ********
2026-03-19 04:31:14.541398 | orchestrator | ===============================================================================
2026-03-19 04:31:14.541417 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.62s
2026-03-19 04:31:14.541437 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.01s
2026-03-19 04:31:14.541457 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.15s
2026-03-19 04:31:14.541490 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.28s
2026-03-19 04:31:14.541511 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.68s
2026-03-19 04:31:14.541530 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.65s
2026-03-19 04:31:14.541550 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.33s
2026-03-19 04:31:14.541570 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.31s
2026-03-19 04:31:14.541590 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.83s
2026-03-19 04:31:14.541610 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.69s
2026-03-19 04:31:14.541629 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.64s
2026-03-19 04:31:14.541670 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.56s
2026-03-19 04:31:14.541690 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.49s
2026-03-19 04:31:14.541710 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.31s
2026-03-19 04:31:14.541730 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.29s
2026-03-19 04:31:14.541749 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.19s
2026-03-19 04:31:14.541769 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.11s
2026-03-19 04:31:14.541789 | orchestrator | module-load : Load modules ---------------------------------------------- 2.04s
2026-03-19 04:31:14.541809 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.96s
2026-03-19 04:31:14.541829 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.81s
2026-03-19 04:31:14.823025 | orchestrator | + osism apply -a upgrade ovn
2026-03-19 04:31:16.871808 | orchestrator | 2026-03-19 04:31:16 | INFO  | Task 9e758ea9-f80b-4902-abd4-904e7c0f6d49 (ovn) was prepared for execution.
2026-03-19 04:31:16.871931 | orchestrator | 2026-03-19 04:31:16 | INFO  | It takes a moment until task 9e758ea9-f80b-4902-abd4-904e7c0f6d49 (ovn) has been started and output is visible here.
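The PLAY RECAP above reports failed=0 and unreachable=0 on every node, which is what the periodic job relies on to call the openvswitch upgrade successful. A minimal sketch of checking such recap lines programmatically (a hypothetical helper, not part of this job's tooling):

```python
import re

# Matches the per-host summary lines Ansible prints under "PLAY RECAP",
# e.g. "testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap shows failed or unreachable tasks."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.search(line)
        if m and (int(m["failed"]) or int(m["unreachable"])):
            bad.append(m["host"])
    return bad

recap = [
    "testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0",
    "testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # → []
```

An empty result means the play is clean; any host name returned would warrant scrolling back to its failed task output.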
2026-03-19 04:31:30.437314 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-19 04:31:30.437446 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-19 04:31:30.437475 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-19 04:31:30.437502 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-19 04:31:30.437536 | orchestrator |
2026-03-19 04:31:30.437553 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-19 04:31:30.437568 | orchestrator |
2026-03-19 04:31:30.437578 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-19 04:31:30.437588 | orchestrator | Thursday 19 March 2026 04:31:21 +0000 (0:00:01.106) 0:00:01.106 ********
2026-03-19 04:31:30.437598 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:31:30.437608 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:31:30.437618 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:31:30.437628 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:31:30.437637 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:31:30.437646 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:31:30.437687 | orchestrator |
2026-03-19 04:31:30.437698 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-19 04:31:30.437707 | orchestrator | Thursday 19 March 2026 04:31:23 +0000 (0:00:02.016) 0:00:03.123 ********
2026-03-19 04:31:30.437717 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-19 04:31:30.437727 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-19 04:31:30.437760 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-19 04:31:30.437770 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-19 04:31:30.437779 | orchestrator | ok: [testbed-node-4] =>
(item=enable_ovn_True) 2026-03-19 04:31:30.437789 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-19 04:31:30.437798 | orchestrator | 2026-03-19 04:31:30.437808 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-19 04:31:30.437817 | orchestrator | 2026-03-19 04:31:30.437827 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-19 04:31:30.437836 | orchestrator | Thursday 19 March 2026 04:31:25 +0000 (0:00:01.149) 0:00:04.273 ******** 2026-03-19 04:31:30.437849 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:31:30.437861 | orchestrator | 2026-03-19 04:31:30.437873 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-19 04:31:30.437884 | orchestrator | Thursday 19 March 2026 04:31:26 +0000 (0:00:01.548) 0:00:05.821 ******** 2026-03-19 04:31:30.437897 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.437911 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-19 04:31:30.437923 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.437934 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.437961 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.437980 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.437998 | orchestrator | 2026-03-19 
04:31:30.438092 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-19 04:31:30.438117 | orchestrator | Thursday 19 March 2026 04:31:27 +0000 (0:00:01.251) 0:00:07.073 ******** 2026-03-19 04:31:30.438135 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438153 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438163 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438173 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438183 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438193 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438202 | orchestrator | 2026-03-19 04:31:30.438212 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-19 04:31:30.438224 | orchestrator | Thursday 19 March 2026 04:31:29 +0000 (0:00:01.385) 0:00:08.458 ******** 2026-03-19 04:31:30.438241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:30.438275 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631180 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631250 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631257 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631266 | orchestrator | 2026-03-19 04:31:34.631271 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-19 04:31:34.631276 | orchestrator | Thursday 19 March 2026 04:31:30 +0000 (0:00:01.109) 0:00:09.568 ******** 2026-03-19 04:31:34.631280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631284 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631291 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631332 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631337 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631341 | orchestrator | 2026-03-19 04:31:34.631345 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-19 04:31:34.631349 | orchestrator | Thursday 19 March 2026 04:31:32 +0000 (0:00:01.939) 0:00:11.507 ******** 2026-03-19 04:31:34.631354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:31:34.631384 | orchestrator | 2026-03-19 04:31:34.631388 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-19 04:31:34.631393 | orchestrator | Thursday 19 March 2026 04:31:33 +0000 (0:00:01.442) 0:00:12.950 ******** 2026-03-19 04:31:34.631397 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:31:34.631402 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:31:34.631406 | orchestrator | } 2026-03-19 04:31:34.631410 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:31:34.631414 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:31:34.631417 | orchestrator | } 2026-03-19 04:31:34.631421 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:31:34.631425 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:31:34.631429 | orchestrator | } 2026-03-19 04:31:34.631433 | orchestrator | changed: [testbed-node-3] => { 2026-03-19 04:31:34.631436 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:31:34.631440 | orchestrator | } 2026-03-19 04:31:34.631444 | orchestrator | changed: [testbed-node-4] => { 2026-03-19 04:31:34.631448 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 
04:31:34.631451 | orchestrator | } 2026-03-19 04:31:34.631461 | orchestrator | changed: [testbed-node-5] => { 2026-03-19 04:31:59.564368 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:31:59.564472 | orchestrator | } 2026-03-19 04:31:59.564484 | orchestrator | 2026-03-19 04:31:59.564494 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:31:59.564503 | orchestrator | Thursday 19 March 2026 04:31:34 +0000 (0:00:00.808) 0:00:13.758 ******** 2026-03-19 04:31:59.564514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564526 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:31:59.564535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564544 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:31:59.564553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564561 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:31:59.564569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564578 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:31:59.564587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564664 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:31:59.564675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:31:59.564683 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:31:59.564691 | orchestrator | 2026-03-19 04:31:59.564699 | orchestrator | TASK [ovn-controller : Create br-int 
bridge on OpenvSwitch] ******************** 2026-03-19 04:31:59.564708 | orchestrator | Thursday 19 March 2026 04:31:36 +0000 (0:00:01.639) 0:00:15.398 ******** 2026-03-19 04:31:59.564716 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:31:59.564727 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:31:59.564740 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:31:59.564753 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:31:59.564767 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:31:59.564780 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:31:59.564792 | orchestrator | 2026-03-19 04:31:59.564805 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-19 04:31:59.564818 | orchestrator | Thursday 19 March 2026 04:31:38 +0000 (0:00:02.463) 0:00:17.862 ******** 2026-03-19 04:31:59.564829 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-19 04:31:59.564842 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-19 04:31:59.564866 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-19 04:31:59.564913 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-19 04:31:59.564929 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-19 04:31:59.564943 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-19 04:31:59.564956 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-19 04:31:59.564970 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-19 04:31:59.564983 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.564996 | orchestrator | ok: [testbed-node-4] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.565009 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.565022 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.565036 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.565049 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-19 04:31:59.565064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565079 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565091 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565118 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-19 04:31:59.565135 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565143 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565151 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565159 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565167 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565174 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-19 04:31:59.565182 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565190 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565198 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565206 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565214 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565222 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-19 04:31:59.565230 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565238 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565245 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565253 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565261 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565269 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-19 04:31:59.565277 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 04:31:59.565285 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 04:31:59.565293 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 04:31:59.565301 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-19 04:31:59.565322 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 04:34:21.438266 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-19 04:34:21.438384 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-19 04:34:21.438414 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-19 04:34:21.438423 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-19 04:34:21.438449 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-19 04:34:21.438455 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-19 04:34:21.438462 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-19 04:34:21.438469 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 
'value': '', 'state': 'absent'}) 2026-03-19 04:34:21.438476 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 04:34:21.438482 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-19 04:34:21.438488 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 04:34:21.438496 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 04:34:21.438503 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-19 04:34:21.438509 | orchestrator | 2026-03-19 04:34:21.438517 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438523 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:20.334) 0:00:38.196 ******** 2026-03-19 04:34:21.438531 | orchestrator | 2026-03-19 04:34:21.438541 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438553 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.083) 0:00:38.280 ******** 2026-03-19 04:34:21.438569 | orchestrator | 2026-03-19 04:34:21.438578 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438589 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.075) 0:00:38.355 ******** 2026-03-19 04:34:21.438598 | orchestrator | 2026-03-19 04:34:21.438608 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438618 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.078) 0:00:38.434 ******** 2026-03-19 
04:34:21.438628 | orchestrator | 2026-03-19 04:34:21.438638 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438649 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.074) 0:00:38.509 ******** 2026-03-19 04:34:21.438659 | orchestrator | 2026-03-19 04:34:21.438670 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-19 04:34:21.438679 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.079) 0:00:38.588 ******** 2026-03-19 04:34:21.438689 | orchestrator | 2026-03-19 04:34:21.438699 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-19 04:34:21.438709 | orchestrator | Thursday 19 March 2026 04:31:59 +0000 (0:00:00.073) 0:00:38.661 ******** 2026-03-19 04:34:21.438718 | orchestrator | changed: [testbed-node-3] 2026-03-19 04:34:21.438728 | orchestrator | changed: [testbed-node-4] 2026-03-19 04:34:21.438739 | orchestrator | changed: [testbed-node-5] 2026-03-19 04:34:21.438749 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:34:21.438758 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:34:21.438770 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:34:21.438776 | orchestrator | 2026-03-19 04:34:21.438782 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-19 04:34:21.438788 | orchestrator | 2026-03-19 04:34:21.438795 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 04:34:21.438801 | orchestrator | Thursday 19 March 2026 04:34:10 +0000 (0:02:10.787) 0:02:49.449 ******** 2026-03-19 04:34:21.438808 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:34:21.438823 | orchestrator | 2026-03-19 04:34:21.438831 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-19 04:34:21.438838 | orchestrator | Thursday 19 March 2026 04:34:11 +0000 (0:00:01.135) 0:02:50.584 ******** 2026-03-19 04:34:21.438845 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-19 04:34:21.438853 | orchestrator | 2026-03-19 04:34:21.438860 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-19 04:34:21.438867 | orchestrator | Thursday 19 March 2026 04:34:12 +0000 (0:00:01.082) 0:02:51.667 ******** 2026-03-19 04:34:21.438875 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.438883 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.438892 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.438903 | orchestrator | 2026-03-19 04:34:21.438921 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-19 04:34:21.438951 | orchestrator | Thursday 19 March 2026 04:34:13 +0000 (0:00:00.873) 0:02:52.541 ******** 2026-03-19 04:34:21.438964 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.438972 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.438979 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.438986 | orchestrator | 2026-03-19 04:34:21.438993 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-19 04:34:21.439001 | orchestrator | Thursday 19 March 2026 04:34:13 +0000 (0:00:00.349) 0:02:52.890 ******** 2026-03-19 04:34:21.439008 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439015 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439022 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439028 | orchestrator | 2026-03-19 04:34:21.439035 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-19 04:34:21.439041 | orchestrator | Thursday 19 March 2026 
04:34:14 +0000 (0:00:00.343) 0:02:53.234 ******** 2026-03-19 04:34:21.439047 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439053 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439059 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439089 | orchestrator | 2026-03-19 04:34:21.439097 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-19 04:34:21.439103 | orchestrator | Thursday 19 March 2026 04:34:14 +0000 (0:00:00.542) 0:02:53.776 ******** 2026-03-19 04:34:21.439109 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439115 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439121 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439127 | orchestrator | 2026-03-19 04:34:21.439133 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-19 04:34:21.439139 | orchestrator | Thursday 19 March 2026 04:34:14 +0000 (0:00:00.357) 0:02:54.134 ******** 2026-03-19 04:34:21.439145 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:34:21.439151 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:34:21.439157 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:34:21.439163 | orchestrator | 2026-03-19 04:34:21.439169 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-19 04:34:21.439175 | orchestrator | Thursday 19 March 2026 04:34:15 +0000 (0:00:00.313) 0:02:54.447 ******** 2026-03-19 04:34:21.439181 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439187 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439193 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439199 | orchestrator | 2026-03-19 04:34:21.439206 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-19 04:34:21.439212 | orchestrator | Thursday 19 March 2026 04:34:16 +0000 (0:00:00.739) 0:02:55.187 ******** 
2026-03-19 04:34:21.439218 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439224 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439230 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439236 | orchestrator | 2026-03-19 04:34:21.439242 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-19 04:34:21.439256 | orchestrator | Thursday 19 March 2026 04:34:16 +0000 (0:00:00.520) 0:02:55.708 ******** 2026-03-19 04:34:21.439262 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439268 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439275 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439281 | orchestrator | 2026-03-19 04:34:21.439287 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-19 04:34:21.439294 | orchestrator | Thursday 19 March 2026 04:34:17 +0000 (0:00:00.831) 0:02:56.539 ******** 2026-03-19 04:34:21.439304 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439322 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439333 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439343 | orchestrator | 2026-03-19 04:34:21.439353 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-19 04:34:21.439363 | orchestrator | Thursday 19 March 2026 04:34:17 +0000 (0:00:00.349) 0:02:56.889 ******** 2026-03-19 04:34:21.439372 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:34:21.439380 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:34:21.439388 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:34:21.439399 | orchestrator | 2026-03-19 04:34:21.439411 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-19 04:34:21.439421 | orchestrator | Thursday 19 March 2026 04:34:18 +0000 (0:00:00.513) 0:02:57.403 ******** 2026-03-19 04:34:21.439430 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 04:34:21.439441 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:34:21.439450 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:34:21.439459 | orchestrator | 2026-03-19 04:34:21.439468 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-19 04:34:21.439478 | orchestrator | Thursday 19 March 2026 04:34:18 +0000 (0:00:00.340) 0:02:57.743 ******** 2026-03-19 04:34:21.439489 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439500 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439509 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439518 | orchestrator | 2026-03-19 04:34:21.439529 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-19 04:34:21.439538 | orchestrator | Thursday 19 March 2026 04:34:19 +0000 (0:00:00.740) 0:02:58.483 ******** 2026-03-19 04:34:21.439549 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439560 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439569 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439579 | orchestrator | 2026-03-19 04:34:21.439588 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-19 04:34:21.439594 | orchestrator | Thursday 19 March 2026 04:34:19 +0000 (0:00:00.345) 0:02:58.829 ******** 2026-03-19 04:34:21.439600 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439606 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:34:21.439613 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439619 | orchestrator | 2026-03-19 04:34:21.439625 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-19 04:34:21.439631 | orchestrator | Thursday 19 March 2026 04:34:20 +0000 (0:00:01.035) 0:02:59.864 ******** 2026-03-19 04:34:21.439638 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:34:21.439644 | orchestrator | 
ok: [testbed-node-1] 2026-03-19 04:34:21.439650 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:34:21.439656 | orchestrator | 2026-03-19 04:34:21.439662 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-19 04:34:21.439668 | orchestrator | Thursday 19 March 2026 04:34:21 +0000 (0:00:00.365) 0:03:00.230 ******** 2026-03-19 04:34:21.439676 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:34:21.439693 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:34:21.439709 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:34:21.439721 | orchestrator | 2026-03-19 04:34:21.439739 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-19 04:34:30.048012 | orchestrator | Thursday 19 March 2026 04:34:21 +0000 (0:00:00.333) 0:03:00.563 ******** 2026-03-19 04:34:30.048135 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:34:30.048160 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:34:30.048166 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:34:30.048170 | orchestrator | 2026-03-19 04:34:30.048175 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-19 04:34:30.048180 | orchestrator | Thursday 19 March 2026 04:34:22 +0000 (0:00:00.689) 0:03:01.253 ******** 2026-03-19 04:34:30.048187 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048195 
| orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048207 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048212 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048244 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-19 04:34:30.048260 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:30.048270 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 
'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:30.048280 | orchestrator | 2026-03-19 04:34:30.048284 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-19 04:34:30.048289 | orchestrator | Thursday 19 March 2026 04:34:25 +0000 (0:00:02.946) 0:03:04.199 ******** 2026-03-19 04:34:30.048294 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048299 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:30.048315 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157684 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157832 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157847 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-19 04:34:40.157857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:40.157877 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 
'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:40.157959 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.157971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:40.157981 | orchestrator | 2026-03-19 04:34:40.157992 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-19 04:34:40.158002 | orchestrator | Thursday 19 March 2026 04:34:30 +0000 (0:00:04.980) 0:03:09.180 ******** 2026-03-19 04:34:40.158133 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-19 04:34:40.158157 | orchestrator | 2026-03-19 04:34:40.158166 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-19 04:34:40.158175 | orchestrator | Thursday 19 March 2026 04:34:30 +0000 (0:00:00.930) 0:03:10.110 ******** 2026-03-19 
04:34:40.158184 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:34:40.158194 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:34:40.158204 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:34:40.158214 | orchestrator | 2026-03-19 04:34:40.158224 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-19 04:34:40.158235 | orchestrator | Thursday 19 March 2026 04:34:31 +0000 (0:00:00.889) 0:03:10.999 ******** 2026-03-19 04:34:40.158245 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:34:40.158258 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:34:40.158273 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:34:40.158289 | orchestrator | 2026-03-19 04:34:40.158304 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-19 04:34:40.158320 | orchestrator | Thursday 19 March 2026 04:34:33 +0000 (0:00:01.655) 0:03:12.655 ******** 2026-03-19 04:34:40.158334 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:34:40.158349 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:34:40.158364 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:34:40.158380 | orchestrator | 2026-03-19 04:34:40.158396 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-19 04:34:40.158411 | orchestrator | Thursday 19 March 2026 04:34:35 +0000 (0:00:01.878) 0:03:14.534 ******** 2026-03-19 04:34:40.158429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.158461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.158476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.158502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:40.158532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:42.867129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:42.867258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:42.867284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:42.867336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:34:42.867454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 
'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867478 | orchestrator | 2026-03-19 04:34:42.867499 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-19 04:34:42.867521 | orchestrator | Thursday 19 March 2026 04:34:40 +0000 (0:00:04.747) 0:03:19.282 ******** 2026-03-19 04:34:42.867541 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:34:42.867559 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:34:42.867577 | orchestrator | } 2026-03-19 04:34:42.867597 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:34:42.867615 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:34:42.867632 | orchestrator | } 2026-03-19 04:34:42.867649 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:34:42.867666 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:34:42.867684 | orchestrator | } 2026-03-19 04:34:42.867701 | orchestrator | 2026-03-19 04:34:42.867745 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-19 04:34:42.867764 | orchestrator | Thursday 19 March 2026 04:34:40 +0000 (0:00:00.396) 0:03:19.678 ******** 2026-03-19 04:34:42.867783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:34:42.867959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:35:57.443200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-19 04:35:57.443346 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-19 04:35:57.443372 | orchestrator | 2026-03-19 04:35:57.443394 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-19 04:35:57.443414 | orchestrator | Thursday 19 March 2026 04:34:42 +0000 (0:00:02.311) 0:03:21.990 ******** 2026-03-19 04:35:57.443433 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-19 04:35:57.443452 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-19 04:35:57.443471 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-19 04:35:57.443491 | orchestrator | 2026-03-19 04:35:57.443503 | orchestrator | TASK 
[service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-19 04:35:57.443515 | orchestrator | Thursday 19 March 2026 04:34:44 +0000 (0:00:01.202) 0:03:23.193 ******** 2026-03-19 04:35:57.443526 | orchestrator | changed: [testbed-node-0] => { 2026-03-19 04:35:57.443537 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:35:57.443548 | orchestrator | } 2026-03-19 04:35:57.443559 | orchestrator | changed: [testbed-node-1] => { 2026-03-19 04:35:57.443570 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:35:57.443581 | orchestrator | } 2026-03-19 04:35:57.443592 | orchestrator | changed: [testbed-node-2] => { 2026-03-19 04:35:57.443603 | orchestrator |  "msg": "Notifying handlers" 2026-03-19 04:35:57.443613 | orchestrator | } 2026-03-19 04:35:57.443624 | orchestrator | 2026-03-19 04:35:57.443635 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 04:35:57.443646 | orchestrator | Thursday 19 March 2026 04:34:44 +0000 (0:00:00.576) 0:03:23.769 ******** 2026-03-19 04:35:57.443657 | orchestrator | 2026-03-19 04:35:57.443667 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 04:35:57.443678 | orchestrator | Thursday 19 March 2026 04:34:44 +0000 (0:00:00.071) 0:03:23.841 ******** 2026-03-19 04:35:57.443688 | orchestrator | 2026-03-19 04:35:57.443700 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-19 04:35:57.443711 | orchestrator | Thursday 19 March 2026 04:34:44 +0000 (0:00:00.071) 0:03:23.913 ******** 2026-03-19 04:35:57.443721 | orchestrator | 2026-03-19 04:35:57.443734 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-19 04:35:57.443748 | orchestrator | Thursday 19 March 2026 04:34:44 +0000 (0:00:00.072) 0:03:23.985 ******** 2026-03-19 04:35:57.443761 | orchestrator | changed: [testbed-node-0] 
2026-03-19 04:35:57.443773 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:35:57.443809 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:35:57.443822 | orchestrator | 2026-03-19 04:35:57.443849 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-19 04:35:57.443862 | orchestrator | Thursday 19 March 2026 04:34:59 +0000 (0:00:14.811) 0:03:38.797 ******** 2026-03-19 04:35:57.443874 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:35:57.443887 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:35:57.443900 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:35:57.443912 | orchestrator | 2026-03-19 04:35:57.443925 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-19 04:35:57.443937 | orchestrator | Thursday 19 March 2026 04:35:14 +0000 (0:00:14.778) 0:03:53.576 ******** 2026-03-19 04:35:57.443961 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-19 04:35:57.443978 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-19 04:35:57.443998 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-19 04:35:57.444029 | orchestrator | 2026-03-19 04:35:57.444049 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-19 04:35:57.444067 | orchestrator | Thursday 19 March 2026 04:35:28 +0000 (0:00:14.177) 0:04:07.753 ******** 2026-03-19 04:35:57.444110 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:35:57.444145 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:35:57.444164 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:35:57.444183 | orchestrator | 2026-03-19 04:35:57.444201 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-19 04:35:57.444219 | orchestrator | Thursday 19 March 2026 04:35:44 +0000 (0:00:16.072) 0:04:23.825 ******** 2026-03-19 04:35:57.444231 | orchestrator | 
Pausing for 5 seconds 2026-03-19 04:35:57.444242 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:35:57.444253 | orchestrator | 2026-03-19 04:35:57.444264 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-19 04:35:57.444275 | orchestrator | Thursday 19 March 2026 04:35:49 +0000 (0:00:05.158) 0:04:28.984 ******** 2026-03-19 04:35:57.444286 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:35:57.444297 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:35:57.444308 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:35:57.444319 | orchestrator | 2026-03-19 04:35:57.444330 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-19 04:35:57.444360 | orchestrator | Thursday 19 March 2026 04:35:50 +0000 (0:00:00.835) 0:04:29.819 ******** 2026-03-19 04:35:57.444372 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:35:57.444383 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:35:57.444394 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:35:57.444405 | orchestrator | 2026-03-19 04:35:57.444415 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-19 04:35:57.444427 | orchestrator | Thursday 19 March 2026 04:35:51 +0000 (0:00:00.733) 0:04:30.553 ******** 2026-03-19 04:35:57.444437 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:35:57.444448 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:35:57.444459 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:35:57.444470 | orchestrator | 2026-03-19 04:35:57.444481 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-19 04:35:57.444491 | orchestrator | Thursday 19 March 2026 04:35:52 +0000 (0:00:00.819) 0:04:31.373 ******** 2026-03-19 04:35:57.444502 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:35:57.444513 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
04:35:57.444524 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:35:57.444535 | orchestrator |
2026-03-19 04:35:57.444546 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-19 04:35:57.444556 | orchestrator | Thursday 19 March 2026 04:35:52 +0000 (0:00:00.710) 0:04:32.083 ********
2026-03-19 04:35:57.444567 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:35:57.444578 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:35:57.444589 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:35:57.444600 | orchestrator |
2026-03-19 04:35:57.444610 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-19 04:35:57.444621 | orchestrator | Thursday 19 March 2026 04:35:53 +0000 (0:00:00.842) 0:04:32.926 ********
2026-03-19 04:35:57.444632 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:35:57.444643 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:35:57.444654 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:35:57.444668 | orchestrator |
2026-03-19 04:35:57.444686 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-19 04:35:57.444716 | orchestrator | Thursday 19 March 2026 04:35:54 +0000 (0:00:00.801) 0:04:33.727 ********
2026-03-19 04:35:57.444734 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-19 04:35:57.444752 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-19 04:35:57.444783 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-19 04:35:57.444833 | orchestrator |
2026-03-19 04:35:57.444851 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 04:35:57.444869 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19 04:35:57.444889 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-19
04:35:57.444906 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-19 04:35:57.444925 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 04:35:57.444944 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 04:35:57.444962 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-19 04:35:57.444980 | orchestrator |
2026-03-19 04:35:57.444992 | orchestrator |
2026-03-19 04:35:57.445002 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 04:35:57.445022 | orchestrator | Thursday 19 March 2026 04:35:57 +0000 (0:00:02.830) 0:04:36.558 ********
2026-03-19 04:35:57.445033 | orchestrator | ===============================================================================
2026-03-19 04:35:57.445044 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 130.79s
2026-03-19 04:35:57.445055 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.33s
2026-03-19 04:35:57.445065 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.07s
2026-03-19 04:35:57.445076 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.81s
2026-03-19 04:35:57.445086 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.78s
2026-03-19 04:35:57.445097 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 14.18s
2026-03-19 04:35:57.445108 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.16s
2026-03-19 04:35:57.445118 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.98s
2026-03-19 04:35:57.445129 |
orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.75s
2026-03-19 04:35:57.445139 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.95s
2026-03-19 04:35:57.445149 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.83s
2026-03-19 04:35:57.445160 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.46s
2026-03-19 04:35:57.445171 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.31s
2026-03-19 04:35:57.445181 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.01s
2026-03-19 04:35:57.445192 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.94s
2026-03-19 04:35:57.445202 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.88s
2026-03-19 04:35:57.445224 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.66s
2026-03-19 04:35:57.777714 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.64s
2026-03-19 04:35:57.777858 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.55s
2026-03-19 04:35:57.777875 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.44s
2026-03-19 04:35:58.064727 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-19 04:35:58.064868 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-19 04:35:58.064914 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-03-19 04:35:58.069715 | orchestrator | + set -e
2026-03-19 04:35:58.069772 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-19 04:35:58.069778 | orchestrator | ++ export INTERACTIVE=false
2026-03-19 04:35:58.069815 | orchestrator | ++
INTERACTIVE=false
2026-03-19 04:35:58.069822 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-19 04:35:58.069829 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-19 04:35:58.069835 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-03-19 04:36:00.174260 | orchestrator | 2026-03-19 04:36:00 | INFO  | Task dc2f36c3-0a34-4ae2-b582-1c502c7d83e4 (ceph-rolling_update) was prepared for execution.
2026-03-19 04:36:00.174365 | orchestrator | 2026-03-19 04:36:00 | INFO  | It takes a moment until task dc2f36c3-0a34-4ae2-b582-1c502c7d83e4 (ceph-rolling_update) has been started and output is visible here.
2026-03-19 04:36:56.745971 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-19 04:36:56.746127 | orchestrator | 2.16.14
2026-03-19 04:36:56.746145 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-19 04:36:56.746157 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-19 04:36:56.746179 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-19 04:36:56.746189 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-19 04:36:56.746218 | orchestrator |
2026-03-19 04:36:56.746235 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-03-19 04:36:56.746251 | orchestrator |
2026-03-19 04:36:56.746267 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-03-19 04:36:56.746283 | orchestrator | Thursday 19 March 2026 04:36:07 +0000 (0:00:01.169) 0:00:01.169 ********
2026-03-19 04:36:56.746299 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-03-19 04:36:56.746315 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-03-19 04:36:56.746331 | orchestrator | [WARNING]: Could not match supplied host pattern,
ignoring: clients
2026-03-19 04:36:56.746347 | orchestrator | skipping: [localhost]
2026-03-19 04:36:56.746363 | orchestrator |
2026-03-19 04:36:56.746381 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-03-19 04:36:56.746397 | orchestrator |
2026-03-19 04:36:56.746412 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-03-19 04:36:56.746430 | orchestrator | Thursday 19 March 2026 04:36:08 +0000 (0:00:00.914) 0:00:02.084 ********
2026-03-19 04:36:56.746445 | orchestrator | ok: [testbed-node-0] => {
2026-03-19 04:36:56.746463 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746482 | orchestrator | }
2026-03-19 04:36:56.746499 | orchestrator | ok: [testbed-node-1] => {
2026-03-19 04:36:56.746516 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746534 | orchestrator | }
2026-03-19 04:36:56.746552 | orchestrator | ok: [testbed-node-2] => {
2026-03-19 04:36:56.746570 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746587 | orchestrator | }
2026-03-19 04:36:56.746599 | orchestrator | ok: [testbed-node-3] => {
2026-03-19 04:36:56.746608 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746618 | orchestrator | }
2026-03-19 04:36:56.746663 | orchestrator | ok: [testbed-node-4] => {
2026-03-19 04:36:56.746674 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746684 | orchestrator | }
2026-03-19 04:36:56.746694 | orchestrator | ok: [testbed-node-5] => {
2026-03-19 04:36:56.746704 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746739 | orchestrator | }
2026-03-19 04:36:56.746750 | orchestrator | ok: [testbed-manager] => {
2026-03-19 04:36:56.746760 |
orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-03-19 04:36:56.746770 | orchestrator | }
2026-03-19 04:36:56.746779 | orchestrator |
2026-03-19 04:36:56.746789 | orchestrator | TASK [Gather facts] ************************************************************
2026-03-19 04:36:56.746799 | orchestrator | Thursday 19 March 2026 04:36:10 +0000 (0:00:01.814) 0:00:03.899 ********
2026-03-19 04:36:56.746809 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:36:56.746819 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:36:56.746828 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:36:56.746838 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:36:56.746848 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:36:56.746858 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:36:56.746868 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.746877 | orchestrator |
2026-03-19 04:36:56.746887 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-03-19 04:36:56.746897 | orchestrator | Thursday 19 March 2026 04:36:14 +0000 (0:00:03.673) 0:00:07.573 ********
2026-03-19 04:36:56.746907 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 04:36:56.746917 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:36:56.746926 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:36:56.746936 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-19 04:36:56.746945 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 04:36:56.746955 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:36:56.746965 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] =>
(item=testbed-node-3)
2026-03-19 04:36:56.746975 | orchestrator |
2026-03-19 04:36:56.747070 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-03-19 04:36:56.747091 | orchestrator | Thursday 19 March 2026 04:36:44 +0000 (0:00:30.387) 0:00:37.960 ********
2026-03-19 04:36:56.747101 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747111 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747120 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.747130 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747140 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747149 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747159 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.747168 | orchestrator |
2026-03-19 04:36:56.747178 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 04:36:56.747188 | orchestrator | Thursday 19 March 2026 04:36:45 +0000 (0:00:00.938) 0:00:38.899 ********
2026-03-19 04:36:56.747224 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-19 04:36:56.747243 | orchestrator |
2026-03-19 04:36:56.747260 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 04:36:56.747276 | orchestrator | Thursday 19 March 2026 04:36:47 +0000 (0:00:01.768) 0:00:40.667 ********
2026-03-19 04:36:56.747292 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747308 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747326 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.747344 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747361 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747378 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747395 | orchestrator | ok:
[testbed-manager]
2026-03-19 04:36:56.747411 | orchestrator |
2026-03-19 04:36:56.747428 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-19 04:36:56.747444 | orchestrator | Thursday 19 March 2026 04:36:48 +0000 (0:00:01.320) 0:00:41.987 ********
2026-03-19 04:36:56.747474 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747491 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747509 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.747525 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747543 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747560 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747577 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.747595 | orchestrator |
2026-03-19 04:36:56.747611 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 04:36:56.747652 | orchestrator | Thursday 19 March 2026 04:36:49 +0000 (0:00:00.742) 0:00:42.730 ********
2026-03-19 04:36:56.747663 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747673 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747682 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.747691 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747701 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747710 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747720 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.747729 | orchestrator |
2026-03-19 04:36:56.747739 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 04:36:56.747750 | orchestrator | Thursday 19 March 2026 04:36:50 +0000 (0:00:01.236) 0:00:43.967 ********
2026-03-19 04:36:56.747765 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747780 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747796 | orchestrator | ok: [testbed-node-2]
2026-03-19
04:36:56.747811 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747826 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747840 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747850 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.747859 | orchestrator |
2026-03-19 04:36:56.747879 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-19 04:36:56.747888 | orchestrator | Thursday 19 March 2026 04:36:51 +0000 (0:00:00.921) 0:00:44.749 ********
2026-03-19 04:36:56.747898 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.747907 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.747917 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.747926 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.747936 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.747945 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.747954 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.747964 | orchestrator |
2026-03-19 04:36:56.747973 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-19 04:36:56.747983 | orchestrator | Thursday 19 March 2026 04:36:52 +0000 (0:00:00.773) 0:00:45.671 ********
2026-03-19 04:36:56.747992 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.748001 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.748011 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.748020 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.748029 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.748039 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.748048 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.748058 | orchestrator |
2026-03-19 04:36:56.748067 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-19 04:36:56.748077 | orchestrator | Thursday 19 March 2026 04:36:53 +0000
(0:00:00.773) 0:00:46.445 ********
2026-03-19 04:36:56.748086 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:36:56.748096 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:36:56.748109 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:36:56.748129 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:36:56.748146 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:36:56.748161 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:36:56.748176 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:36:56.748190 | orchestrator |
2026-03-19 04:36:56.748205 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-19 04:36:56.748221 | orchestrator | Thursday 19 March 2026 04:36:54 +0000 (0:00:00.952) 0:00:47.398 ********
2026-03-19 04:36:56.748248 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.748265 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.748282 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.748298 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.748315 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.748325 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.748335 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.748345 | orchestrator |
2026-03-19 04:36:56.748355 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-19 04:36:56.748364 | orchestrator | Thursday 19 March 2026 04:36:54 +0000 (0:00:00.679) 0:00:48.077 ********
2026-03-19 04:36:56.748374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:36:56.748384 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:36:56.748393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:36:56.748403 | orchestrator |
2026-03-19 04:36:56.748412 | orchestrator | TASK
[ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-19 04:36:56.748422 | orchestrator | Thursday 19 March 2026 04:36:55 +0000 (0:00:01.035) 0:00:49.113 ********
2026-03-19 04:36:56.748431 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:36:56.748441 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:36:56.748451 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:36:56.748460 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:36:56.748477 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:36:56.748495 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:36:56.748511 | orchestrator | ok: [testbed-manager]
2026-03-19 04:36:56.748527 | orchestrator |
2026-03-19 04:36:56.748543 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-19 04:36:56.748574 | orchestrator | Thursday 19 March 2026 04:36:56 +0000 (0:00:00.881) 0:00:49.994 ********
2026-03-19 04:37:08.413328 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:37:08.413464 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:37:08.413482 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:37:08.413494 | orchestrator |
2026-03-19 04:37:08.413507 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-19 04:37:08.413519 | orchestrator | Thursday 19 March 2026 04:36:59 +0000 (0:00:02.328) 0:00:52.323 ********
2026-03-19 04:37:08.413531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:37:08.413543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-19 04:37:08.413554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-19 04:37:08.413565 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.413576 | orchestrator |
2026-03-19 04:37:08.413587 |
orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-19 04:37:08.413651 | orchestrator | Thursday 19 March 2026 04:36:59 +0000 (0:00:00.398) 0:00:52.721 ********
2026-03-19 04:37:08.413664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413702 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.413714 | orchestrator |
2026-03-19 04:37:08.413741 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-19 04:37:08.413776 | orchestrator | Thursday 19 March 2026 04:37:00 +0000 (0:00:00.863) 0:00:53.585 ********
2026-03-19 04:37:08.413790 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413805 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413816 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413827 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.413838 | orchestrator |
2026-03-19 04:37:08.413850 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-19 04:37:08.413863 | orchestrator | Thursday 19 March 2026 04:37:00 +0000 (0:00:00.170) 0:00:53.756 ********
2026-03-19 04:37:08.413879 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e6aaaabd2759', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:36:57.430922', 'end': '2026-03-19 04:36:57.482030', 'delta': '0:00:00.051108', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e6aaaabd2759'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413917 | orchestrator | ok: [testbed-node-0] => (item={'changed':
False, 'stdout': '7d1c29d08d66', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:36:58.286844', 'end': '2026-03-19 04:36:58.346967', 'delta': '0:00:00.060123', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d1c29d08d66'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413932 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '115813b5cae5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:36:58.848775', 'end': '2026-03-19 04:36:58.899620', 'delta': '0:00:00.050845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['115813b5cae5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:37:08.413953 | orchestrator |
2026-03-19 04:37:08.413967 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-19 04:37:08.413980 | orchestrator | Thursday 19 March 2026 04:37:00 +0000 (0:00:00.382) 0:00:54.138 ********
2026-03-19 04:37:08.413994 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:37:08.414007 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:37:08.414083 | orchestrator | ok: [testbed-node-2]
2026-03-19
04:37:08.414095 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:37:08.414106 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:37:08.414117 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:37:08.414128 | orchestrator | ok: [testbed-manager]
2026-03-19 04:37:08.414139 | orchestrator |
2026-03-19 04:37:08.414150 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-19 04:37:08.414162 | orchestrator | Thursday 19 March 2026 04:37:01 +0000 (0:00:00.882) 0:00:55.021 ********
2026-03-19 04:37:08.414173 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.414184 | orchestrator |
2026-03-19 04:37:08.414195 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-19 04:37:08.414206 | orchestrator | Thursday 19 March 2026 04:37:01 +0000 (0:00:00.234) 0:00:55.255 ********
2026-03-19 04:37:08.414217 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:37:08.414228 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:37:08.414238 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:37:08.414249 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:37:08.414260 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:37:08.414271 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:37:08.414281 | orchestrator | ok: [testbed-manager]
2026-03-19 04:37:08.414292 | orchestrator |
2026-03-19 04:37:08.414303 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-19 04:37:08.414314 | orchestrator | Thursday 19 March 2026 04:37:02 +0000 (0:00:00.929) 0:00:56.185 ********
2026-03-19 04:37:08.414325 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:37:08.414336 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414357 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414368 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414379 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-19 04:37:08.414401 | orchestrator |
2026-03-19 04:37:08.414411 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:37:08.414422 | orchestrator | Thursday 19 March 2026 04:37:05 +0000 (0:00:02.841) 0:00:59.027 ********
2026-03-19 04:37:08.414433 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:37:08.414444 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:37:08.414454 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:37:08.414465 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:37:08.414476 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:37:08.414487 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:37:08.414498 | orchestrator | ok: [testbed-manager]
2026-03-19 04:37:08.414509 | orchestrator |
2026-03-19 04:37:08.414520 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-19 04:37:08.414531 | orchestrator | Thursday 19 March 2026 04:37:06 +0000 (0:00:00.953) 0:00:59.981 ********
2026-03-19 04:37:08.414542 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.414553 | orchestrator |
2026-03-19 04:37:08.414564 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-19 04:37:08.414575 | orchestrator | Thursday 19 March 2026 04:37:06 +0000 (0:00:00.135) 0:01:00.116 ********
2026-03-19 04:37:08.414591 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.414693 | orchestrator |
2026-03-19 04:37:08.414705 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:37:08.414726 | orchestrator | Thursday 19 March 2026 04:37:07
+0000 (0:00:00.227) 0:01:00.344 ********
2026-03-19 04:37:08.414737 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:08.414748 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:37:08.414759 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:37:08.414769 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:37:08.414779 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:37:08.414798 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:37:13.608722 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:37:13.608836 | orchestrator |
2026-03-19 04:37:13.608854 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-19 04:37:13.608868 | orchestrator | Thursday 19 March 2026 04:37:08 +0000 (0:00:01.315) 0:01:01.660 ********
2026-03-19 04:37:13.608880 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:13.608891 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:37:13.608902 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:37:13.608914 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:37:13.608925 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:37:13.608936 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:37:13.608946 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:37:13.608957 | orchestrator |
2026-03-19 04:37:13.608969 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-19 04:37:13.608981 | orchestrator | Thursday 19 March 2026 04:37:09 +0000 (0:00:00.812) 0:01:02.473 ********
2026-03-19 04:37:13.609000 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:13.609023 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:37:13.609048 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:37:13.609065 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:37:13.609083 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:37:13.609100 |
orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:13.609118 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:13.609135 | orchestrator | 2026-03-19 04:37:13.609152 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:37:13.609170 | orchestrator | Thursday 19 March 2026 04:37:10 +0000 (0:00:00.901) 0:01:03.374 ******** 2026-03-19 04:37:13.609188 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:13.609206 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:13.609225 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:13.609243 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:13.609261 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:13.609278 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:13.609296 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:13.609314 | orchestrator | 2026-03-19 04:37:13.609332 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:37:13.609351 | orchestrator | Thursday 19 March 2026 04:37:10 +0000 (0:00:00.715) 0:01:04.090 ******** 2026-03-19 04:37:13.609371 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:13.609391 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:13.609430 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:13.609450 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:13.609469 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:13.609487 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:13.609505 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:13.609521 | orchestrator | 2026-03-19 04:37:13.609537 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:37:13.609557 | orchestrator | Thursday 19 March 2026 04:37:11 +0000 (0:00:00.963) 0:01:05.054 ******** 2026-03-19 04:37:13.609575 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:13.609698 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:13.609711 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:13.609722 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:13.609733 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:13.609744 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:13.609778 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:13.609790 | orchestrator | 2026-03-19 04:37:13.609801 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:37:13.609812 | orchestrator | Thursday 19 March 2026 04:37:12 +0000 (0:00:00.718) 0:01:05.772 ******** 2026-03-19 04:37:13.609823 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:13.609834 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:13.609845 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:13.609856 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:13.609866 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:13.609877 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:13.609888 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:13.609898 | orchestrator | 2026-03-19 04:37:13.609909 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:37:13.609920 | orchestrator | Thursday 19 March 2026 04:37:13 +0000 (0:00:00.925) 0:01:06.698 ******** 2026-03-19 04:37:13.609934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-19 04:37:13.609950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.609962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.609998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:37:13.610012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.610083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.610103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.610129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:37:13.610156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866363 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:37:13.866371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:37:13.866419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:37:13.866458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:13.866482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:37:14.016254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016400 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:37:14.016430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:37:14.016462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}})
2026-03-19 04:37:14.016543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:37:14.016556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}})
2026-03-19 04:37:14.016569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.016642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:37:14.144462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144666 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:37:14.144680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}})
2026-03-19 04:37:14.144694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}})
2026-03-19 04:37:14.144707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:37:14.144827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:37:14.144876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}})
2026-03-19 04:37:14.144903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:37:14.307476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}})  2026-03-19 04:37:14.307664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.307687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.307700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:37:14.307713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.307725 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:14.307738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:37:14.307773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 
04:37:14.307806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}})  2026-03-19 04:37:14.307826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}})  2026-03-19 04:37:14.307838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.307855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:37:14.307881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}})  2026-03-19 04:37:14.438743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:37:14.438774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}})  2026-03-19 04:37:14.438784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438829 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:37:14.438840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438849 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:14.438859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.438886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}})  2026-03-19 04:37:14.438895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}})  2026-03-19 04:37:14.438912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:37:14.571389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571426 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571492 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 
'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:37:14.571501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571516 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:14.571526 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.571561 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a587d6ca', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part16', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part14', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part15', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part1', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:37:14.930319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.930445 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:37:14.930473 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:14.930525 | orchestrator | 2026-03-19 04:37:14.930544 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:37:14.930563 | orchestrator | Thursday 19 March 2026 04:37:14 +0000 (0:00:01.125) 0:01:07.824 ******** 2026-03-19 04:37:14.930679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930707 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930727 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930766 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930816 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930837 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:14.930934 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092700 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:15.092720 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092733 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092743 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092769 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092780 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092808 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092825 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092846 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 
'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092859 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.092883 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399640 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:15.399720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399733 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399738 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399760 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399786 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399805 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399811 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399820 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399827 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.399831 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:15.399839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.514516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.590884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.590988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591113 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591138 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': 
'512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.591170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725036 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725321 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:15.725341 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.725388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 
'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806897 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.806988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807049 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807095 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.807118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876740 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:15.876775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876805 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876879 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876904 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-03-19 04:37:15.876917 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:15.876944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.252934 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253044 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253062 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:19.253077 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253089 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253101 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253157 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a587d6ca', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part16', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 
'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part14', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part15', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part1', 'scsi-SQEMU_QEMU_HARDDISK_a587d6ca-13b1-4767-8b95-b15cf08fcf75-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253204 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253250 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:37:19.253273 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:19.253285 | orchestrator | 2026-03-19 04:37:19.253297 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:37:19.253310 | orchestrator | Thursday 19 March 2026 04:37:16 +0000 (0:00:01.438) 0:01:09.263 ******** 2026-03-19 04:37:19.253321 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:37:19.253333 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:37:19.253344 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:37:19.253354 | orchestrator | ok: [testbed-node-3] 2026-03-19 
04:37:19.253365 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:37:19.253376 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:37:19.253387 | orchestrator | ok: [testbed-manager] 2026-03-19 04:37:19.253398 | orchestrator | 2026-03-19 04:37:19.253409 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:37:19.253420 | orchestrator | Thursday 19 March 2026 04:37:17 +0000 (0:00:01.299) 0:01:10.562 ******** 2026-03-19 04:37:19.253433 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:37:19.253445 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:37:19.253457 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:37:19.253469 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:37:19.253481 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:37:19.253493 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:37:19.253505 | orchestrator | ok: [testbed-manager] 2026-03-19 04:37:19.253517 | orchestrator | 2026-03-19 04:37:19.253529 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:37:19.253541 | orchestrator | Thursday 19 March 2026 04:37:18 +0000 (0:00:00.718) 0:01:11.280 ******** 2026-03-19 04:37:19.253553 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:37:19.253566 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:37:19.253614 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:37:19.253626 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:37:19.253639 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:37:19.253651 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:19.253664 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:37:19.253676 | orchestrator | 2026-03-19 04:37:19.253689 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:37:19.253710 | orchestrator | Thursday 19 March 2026 04:37:19 +0000 (0:00:01.222) 0:01:12.503 ******** 2026-03-19 04:37:31.373015 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:31.373123 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:31.373135 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:31.373157 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373165 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373172 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373179 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:31.373187 | orchestrator | 2026-03-19 04:37:31.373196 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:37:31.373205 | orchestrator | Thursday 19 March 2026 04:37:19 +0000 (0:00:00.709) 0:01:13.213 ******** 2026-03-19 04:37:31.373212 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:31.373218 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:31.373224 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:31.373231 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373238 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373245 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373252 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-03-19 04:37:31.373259 | orchestrator | 2026-03-19 04:37:31.373266 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:37:31.373273 | orchestrator | Thursday 19 March 2026 04:37:21 +0000 (0:00:01.546) 0:01:14.759 ******** 2026-03-19 04:37:31.373280 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:31.373287 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:31.373293 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:31.373301 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373331 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373338 | orchestrator | skipping: [testbed-node-5] 
2026-03-19 04:37:31.373345 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:31.373352 | orchestrator | 2026-03-19 04:37:31.373358 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:37:31.373365 | orchestrator | Thursday 19 March 2026 04:37:22 +0000 (0:00:00.746) 0:01:15.506 ******** 2026-03-19 04:37:31.373372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:37:31.373379 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-19 04:37:31.373385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:37:31.373392 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-19 04:37:31.373399 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:37:31.373405 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:37:31.373411 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-19 04:37:31.373417 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 04:37:31.373424 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-19 04:37:31.373431 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 04:37:31.373437 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 04:37:31.373443 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 04:37:31.373450 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:37:31.373456 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 04:37:31.373463 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 04:37:31.373469 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 04:37:31.373475 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 04:37:31.373482 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-19 
04:37:31.373488 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 04:37:31.373494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-19 04:37:31.373500 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-19 04:37:31.373507 | orchestrator | 2026-03-19 04:37:31.373513 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:37:31.373520 | orchestrator | Thursday 19 March 2026 04:37:24 +0000 (0:00:01.951) 0:01:17.458 ******** 2026-03-19 04:37:31.373527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:37:31.373534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:37:31.373560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:37:31.373570 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:31.373577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:37:31.373584 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:37:31.373590 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:37:31.373597 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:31.373604 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:37:31.373611 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:37:31.373617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:37:31.373624 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:31.373631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 04:37:31.373637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 04:37:31.373645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 04:37:31.373652 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 04:37:31.373658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 04:37:31.373666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 04:37:31.373682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 04:37:31.373689 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373696 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 04:37:31.373703 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 04:37:31.373710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 04:37:31.373716 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373742 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-19 04:37:31.373749 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-19 04:37:31.373762 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-19 04:37:31.373769 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:31.373776 | orchestrator | 2026-03-19 04:37:31.373782 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:37:31.373789 | orchestrator | Thursday 19 March 2026 04:37:25 +0000 (0:00:01.054) 0:01:18.512 ******** 2026-03-19 04:37:31.373796 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:37:31.373802 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:37:31.373809 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:37:31.373816 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:37:31.373824 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:37:31.373831 | orchestrator | 2026-03-19 04:37:31.373838 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:37:31.373846 | orchestrator | Thursday 19 March 2026 04:37:26 +0000 (0:00:00.965) 0:01:19.478 ******** 2026-03-19 04:37:31.373853 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373859 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373866 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373872 | orchestrator | 2026-03-19 04:37:31.373879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:37:31.373886 | orchestrator | Thursday 19 March 2026 04:37:26 +0000 (0:00:00.532) 0:01:20.010 ******** 2026-03-19 04:37:31.373893 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373899 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373906 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373912 | orchestrator | 2026-03-19 04:37:31.373919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:37:31.373926 | orchestrator | Thursday 19 March 2026 04:37:27 +0000 (0:00:00.355) 0:01:20.366 ******** 2026-03-19 04:37:31.373933 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.373940 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:37:31.373947 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:37:31.373954 | orchestrator | 2026-03-19 04:37:31.373961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:37:31.373967 | orchestrator | Thursday 19 March 2026 04:37:27 +0000 (0:00:00.332) 0:01:20.698 ******** 2026-03-19 04:37:31.373975 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:37:31.373984 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:37:31.373991 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:37:31.373998 | orchestrator | 2026-03-19 04:37:31.374005 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-03-19 04:37:31.374012 | orchestrator | Thursday 19 March 2026 04:37:27 +0000 (0:00:00.402) 0:01:21.101 ******** 2026-03-19 04:37:31.374062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:37:31.374070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:37:31.374076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 04:37:31.374083 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.374090 | orchestrator | 2026-03-19 04:37:31.374097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:37:31.374112 | orchestrator | Thursday 19 March 2026 04:37:28 +0000 (0:00:00.387) 0:01:21.489 ******** 2026-03-19 04:37:31.374119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:37:31.374126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:37:31.374132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 04:37:31.374139 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.374146 | orchestrator | 2026-03-19 04:37:31.374153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:37:31.374160 | orchestrator | Thursday 19 March 2026 04:37:28 +0000 (0:00:00.651) 0:01:22.140 ******** 2026-03-19 04:37:31.374166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:37:31.374173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:37:31.374179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 04:37:31.374186 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:37:31.374193 | orchestrator | 2026-03-19 04:37:31.374200 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 
04:37:31.374207 | orchestrator | Thursday 19 March 2026 04:37:29 +0000 (0:00:00.639) 0:01:22.779 ******** 2026-03-19 04:37:31.374214 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:37:31.374221 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:37:31.374227 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:37:31.374233 | orchestrator | 2026-03-19 04:37:31.374240 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:37:31.374246 | orchestrator | Thursday 19 March 2026 04:37:30 +0000 (0:00:00.557) 0:01:23.336 ******** 2026-03-19 04:37:31.374253 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 04:37:31.374260 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 04:37:31.374266 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 04:37:31.374273 | orchestrator | 2026-03-19 04:37:31.374280 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:37:31.374286 | orchestrator | Thursday 19 March 2026 04:37:30 +0000 (0:00:00.537) 0:01:23.874 ******** 2026-03-19 04:37:31.374293 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:37:31.374300 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:37:31.374308 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:37:31.374315 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:37:31.374333 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:38:08.441005 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:38:08.441141 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:38:08.441159 | orchestrator | 2026-03-19 
04:38:08.441173 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:38:08.441185 | orchestrator | Thursday 19 March 2026 04:37:31 +0000 (0:00:00.748) 0:01:24.622 ******** 2026-03-19 04:38:08.441197 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:38:08.441209 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:38:08.441220 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:38:08.441231 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:38:08.441242 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:38:08.441253 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:38:08.441263 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:38:08.441274 | orchestrator | 2026-03-19 04:38:08.441309 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-03-19 04:38:08.441321 | orchestrator | Thursday 19 March 2026 04:37:33 +0000 (0:00:02.129) 0:01:26.751 ******** 2026-03-19 04:38:08.441332 | orchestrator | changed: [testbed-node-4] 2026-03-19 04:38:08.441344 | orchestrator | changed: [testbed-manager] 2026-03-19 04:38:08.441355 | orchestrator | changed: [testbed-node-3] 2026-03-19 04:38:08.441365 | orchestrator | changed: [testbed-node-5] 2026-03-19 04:38:08.441376 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:38:08.441387 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:38:08.441398 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:38:08.441408 | orchestrator | 2026-03-19 04:38:08.441419 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-03-19 04:38:08.441430 | orchestrator | Thursday 19 March 2026 04:37:52 +0000 (0:00:18.805) 0:01:45.557 ******** 2026-03-19 04:38:08.441440 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:08.441451 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:08.441527 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:08.441544 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:08.441558 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:08.441570 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:08.441583 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:08.441595 | orchestrator | 2026-03-19 04:38:08.441608 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-03-19 04:38:08.441620 | orchestrator | Thursday 19 March 2026 04:37:53 +0000 (0:00:00.914) 0:01:46.471 ******** 2026-03-19 04:38:08.441633 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:08.441645 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:08.441658 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:08.441670 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:08.441683 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:08.441696 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:08.441708 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:08.441720 | orchestrator | 2026-03-19 04:38:08.441732 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-03-19 04:38:08.441745 | orchestrator | Thursday 19 March 2026 04:37:53 +0000 (0:00:00.722) 0:01:47.193 ******** 2026-03-19 04:38:08.441757 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:08.441770 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:38:08.441783 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:38:08.441795 | orchestrator | changed: [testbed-node-3] 
2026-03-19 04:38:08.441808 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:38:08.441820 | orchestrator | changed: [testbed-node-4]
2026-03-19 04:38:08.441833 | orchestrator | changed: [testbed-node-5]
2026-03-19 04:38:08.441845 | orchestrator |
2026-03-19 04:38:08.441858 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-19 04:38:08.441871 | orchestrator | Thursday 19 March 2026 04:37:56 +0000 (0:00:02.252) 0:01:49.446 ********
2026-03-19 04:38:08.441885 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-19 04:38:08.441899 | orchestrator |
2026-03-19 04:38:08.441910 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-19 04:38:08.441921 | orchestrator | Thursday 19 March 2026 04:37:58 +0000 (0:00:02.006) 0:01:51.452 ********
2026-03-19 04:38:08.441932 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.441942 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.441953 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.441964 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.441974 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.441985 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.441995 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442006 | orchestrator |
2026-03-19 04:38:08.442080 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-19 04:38:08.442106 | orchestrator | Thursday 19 March 2026 04:37:58 +0000 (0:00:00.740) 0:01:52.193 ********
2026-03-19 04:38:08.442117 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442128 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442138 | orchestrator | skipping:
[testbed-node-2]
2026-03-19 04:38:08.442149 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442160 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442170 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442181 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442192 | orchestrator |
2026-03-19 04:38:08.442203 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-19 04:38:08.442213 | orchestrator | Thursday 19 March 2026 04:37:59 +0000 (0:00:01.008) 0:01:53.202 ********
2026-03-19 04:38:08.442224 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442255 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442266 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442277 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442288 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442307 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442318 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442329 | orchestrator |
2026-03-19 04:38:08.442340 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-19 04:38:08.442351 | orchestrator | Thursday 19 March 2026 04:38:00 +0000 (0:00:00.767) 0:01:53.970 ********
2026-03-19 04:38:08.442362 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442372 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442383 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442394 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442404 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442415 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442426 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442436 | orchestrator |
2026-03-19 04:38:08.442493 | orchestrator | TASK [ceph-validate : Fail on unsupported
CentOS release] **********************
2026-03-19 04:38:08.442512 | orchestrator | Thursday 19 March 2026 04:38:01 +0000 (0:00:00.955) 0:01:54.926 ********
2026-03-19 04:38:08.442531 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442550 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442567 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442584 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442595 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442606 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442617 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442628 | orchestrator |
2026-03-19 04:38:08.442638 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-19 04:38:08.442649 | orchestrator | Thursday 19 March 2026 04:38:02 +0000 (0:00:00.766) 0:01:55.693 ********
2026-03-19 04:38:08.442660 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442670 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442681 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442691 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442702 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442713 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442723 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442734 | orchestrator |
2026-03-19 04:38:08.442745 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-03-19 04:38:08.442755 | orchestrator | Thursday 19 March 2026 04:38:03 +0000 (0:00:00.939) 0:01:56.632 ********
2026-03-19 04:38:08.442766 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442777 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442788 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442798 | orchestrator |
skipping: [testbed-node-3]
2026-03-19 04:38:08.442809 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442832 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442843 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442854 | orchestrator |
2026-03-19 04:38:08.442864 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-03-19 04:38:08.442875 | orchestrator | Thursday 19 March 2026 04:38:04 +0000 (0:00:00.721) 0:01:57.353 ********
2026-03-19 04:38:08.442886 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.442896 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.442907 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.442932 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.442954 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.442965 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.442975 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.442986 | orchestrator |
2026-03-19 04:38:08.442997 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-03-19 04:38:08.443008 | orchestrator | Thursday 19 March 2026 04:38:05 +0000 (0:00:00.995) 0:01:58.348 ********
2026-03-19 04:38:08.443018 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.443029 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.443039 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.443050 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.443061 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.443071 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.443082 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.443092 | orchestrator |
2026-03-19 04:38:08.443103 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-19
04:38:08.443114 | orchestrator | Thursday 19 March 2026 04:38:06 +0000 (0:00:00.949) 0:01:59.298 ********
2026-03-19 04:38:08.443124 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.443135 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.443145 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.443156 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.443166 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.443177 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.443188 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.443198 | orchestrator |
2026-03-19 04:38:08.443209 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-03-19 04:38:08.443267 | orchestrator | Thursday 19 March 2026 04:38:06 +0000 (0:00:00.729) 0:02:00.027 ********
2026-03-19 04:38:08.443280 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.443291 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.443302 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.443313 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.443324 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:08.443335 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.443345 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.443356 | orchestrator |
2026-03-19 04:38:08.443367 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-03-19 04:38:08.443378 | orchestrator | Thursday 19 March 2026 04:38:07 +0000 (0:00:00.930) 0:02:00.958 ********
2026-03-19 04:38:08.443389 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:08.443399 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:08.443410 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:08.443421 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:08.443432 | orchestrator
| skipping: [testbed-node-4]
2026-03-19 04:38:08.443443 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:08.443453 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:08.443531 | orchestrator |
2026-03-19 04:38:08.443567 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-03-19 04:38:17.567998 | orchestrator | Thursday 19 March 2026 04:38:08 +0000 (0:00:00.732) 0:02:01.690 ********
2026-03-19 04:38:17.568101 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568136 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568146 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 04:38:17.568167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 04:38:17.568176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 04:38:17.568184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 04:38:17.568192 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 04:38:17.568210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 04:38:17.568219 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568227 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568235 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568244 | orchestrator |
2026-03-19 04:38:17.568253 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-03-19 04:38:17.568262 | orchestrator | Thursday 19 March 2026 04:38:09 +0000 (0:00:01.059) 0:02:02.750 ********
2026-03-19 04:38:17.568269 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568277 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568285 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568293 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568301 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568311 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568316 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568321 | orchestrator |
2026-03-19 04:38:17.568326 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-03-19 04:38:17.568332 | orchestrator | Thursday 19 March 2026 04:38:10 +0000 (0:00:00.756) 0:02:03.506 ********
2026-03-19 04:38:17.568337 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568342 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568347 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568352 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568357 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568363 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568368 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568373 | orchestrator |
2026-03-19 04:38:17.568378 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-03-19 04:38:17.568383 | orchestrator | Thursday 19 March 2026 04:38:11 +0000 (0:00:00.980) 0:02:04.487 ********
2026-03-19 04:38:17.568388 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568393 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568401 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568410 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568423 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568432 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568440 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568495 | orchestrator |
2026-03-19 04:38:17.568504 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-03-19 04:38:17.568512 | orchestrator | Thursday 19 March 2026 04:38:11 +0000 (0:00:00.762) 0:02:05.249 ********
2026-03-19 04:38:17.568522 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568528 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568546 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568552 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568558 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568564 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568570 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568576 | orchestrator |
2026-03-19 04:38:17.568582 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-03-19 04:38:17.568588 | orchestrator | Thursday 19 March 2026 04:38:12 +0000 (0:00:00.975) 0:02:06.225 ********
2026-03-19 04:38:17.568594 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568600 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568606 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568612 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568618 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568624 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568630 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568636 | orchestrator |
2026-03-19 04:38:17.568642 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-03-19 04:38:17.568647 | orchestrator | Thursday 19 March 2026 04:38:13 +0000 (0:00:00.925) 0:02:07.150 ********
2026-03-19 04:38:17.568652 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568657 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568662 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568667 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568672 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568677 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568682 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568687 | orchestrator |
2026-03-19 04:38:17.568692 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-03-19 04:38:17.568698 | orchestrator | Thursday 19 March 2026 04:38:14 +0000 (0:00:00.739) 0:02:07.890 ********
2026-03-19 04:38:17.568726 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:38:17.568739 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:38:17.568754 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:38:17.568763 | orchestrator | skipping: [testbed-manager]
2026-03-19 04:38:17.568772 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-19 04:38:17.568781 | orchestrator |
2026-03-19 04:38:17.568788 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-03-19 04:38:17.568796 | orchestrator | Thursday 19 March 2026 04:38:16 +0000 (0:00:01.555) 0:02:09.445 ********
2026-03-19 04:38:17.568804 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:38:17.568814 | orchestrator | ok:
[testbed-node-4]
2026-03-19 04:38:17.568822 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:38:17.568830 | orchestrator |
2026-03-19 04:38:17.568837 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-03-19 04:38:17.568845 | orchestrator | Thursday 19 March 2026 04:38:16 +0000 (0:00:00.399) 0:02:09.844 ********
2026-03-19 04:38:17.568853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 04:38:17.568861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 04:38:17.568870 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 04:38:17.568887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 04:38:17.568895 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.568904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 04:38:17.568920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 04:38:17.568929 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.568937 | orchestrator |
2026-03-19 04:38:17.568946 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-03-19 04:38:17.568955 | orchestrator | Thursday 19 March 2026
04:38:16 +0000 (0:00:00.353) 0:02:10.198 ********
2026-03-19 04:38:17.568966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:17.568977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:17.568986 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:17.568994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:17.569003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:17.569012 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:17.569021 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'},
'ansible_loop_var': 'item'})
2026-03-19 04:38:17.569030 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:17.569039 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:17.569048 | orchestrator |
2026-03-19 04:38:17.569064 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-03-19 04:38:20.586388 | orchestrator | Thursday 19 March 2026 04:38:17 +0000 (0:00:00.616) 0:02:10.815 ********
2026-03-19 04:38:20.586512 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:20.586523 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:20.586530 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:20.586536 | orchestrator |
2026-03-19 04:38:20.586542 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-03-19 04:38:20.586547 | orchestrator | Thursday 19 March 2026 04:38:17 +0000 (0:00:00.336) 0:02:11.152 ********
2026-03-19 04:38:20.586553 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:20.586559 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:20.586564 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:20.586570 | orchestrator |
2026-03-19 04:38:20.586575 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-03-19 04:38:20.586581 | orchestrator | Thursday 19 March 2026 04:38:18 +0000 (0:00:00.325) 0:02:11.477 ********
2026-03-19 04:38:20.586604 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:20.586609 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:20.586624 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:20.586629 |
orchestrator |
2026-03-19 04:38:20.586634 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-03-19 04:38:20.586640 | orchestrator | Thursday 19 March 2026 04:38:18 +0000 (0:00:00.289) 0:02:11.766 ********
2026-03-19 04:38:20.586652 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:20.586657 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:20.586663 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:38:20.586668 | orchestrator |
2026-03-19 04:38:20.586674 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-03-19 04:38:20.586679 | orchestrator | Thursday 19 March 2026 04:38:18 +0000 (0:00:00.313) 0:02:12.080 ********
2026-03-19 04:38:20.586685 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})
2026-03-19 04:38:20.586692 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})
2026-03-19 04:38:20.586698 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})
2026-03-19 04:38:20.586703 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})
2026-03-19 04:38:20.586709 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})
2026-03-19 04:38:20.586714 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})
2026-03-19 04:38:20.586720 | orchestrator |
2026-03-19 04:38:20.586726 | orchestrator | TASK
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-03-19 04:38:20.586732 | orchestrator | Thursday 19 March 2026 04:38:20 +0000 (0:00:01.372) 0:02:13.452 ********
2026-03-19 04:38:20.586741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9/osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1773887672.5562842, 'mtime': 1773887672.552284, 'ctime': 1773887672.552284, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9/osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:20.586778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e/osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime':
1773887692.841646, 'mtime': 1773887692.8356457, 'ctime': 1773887692.8356457, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e/osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:20.586798 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:38:20.586805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-b653c337-740c-52f4-bc46-3e8e37039a81/osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1773887672.673467, 'mtime': 1773887672.6694667, 'ctime': 1773887672.6694667, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path':
'/dev/ceph-b653c337-740c-52f4-bc46-3e8e37039a81/osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:20.586811 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8/osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1773887693.8948524, 'mtime': 1773887693.8898523, 'ctime': 1773887693.8898523, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8/osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:20.586817 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:38:20.586832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path':
'/dev/ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758/osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1773887672.1809728, 'mtime': 1773887672.1769729, 'ctime': 1773887672.1769729, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758/osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}, 'ansible_loop_var': 'item'})
2026-03-19 04:38:22.238742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-eb497169-2d92-5217-a604-0fdb844d53ba/osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1773887693.1003456, 'mtime': 1773887693.0953455, 'ctime': 1773887693.0953455, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable':
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-eb497169-2d92-5217-a604-0fdb844d53ba/osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238817 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:22.238826 | orchestrator | 2026-03-19 04:38:22.238832 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-03-19 04:38:22.238838 | orchestrator | Thursday 19 March 2026 04:38:20 +0000 (0:00:00.388) 0:02:13.840 ******** 2026-03-19 04:38:22.238844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 04:38:22.238850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 04:38:22.238856 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:22.238861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})  2026-03-19 04:38:22.238866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})  2026-03-19 04:38:22.238871 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:22.238876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 
'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 04:38:22.238880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 04:38:22.238885 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:22.238890 | orchestrator | 2026-03-19 04:38:22.238895 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-03-19 04:38:22.238919 | orchestrator | Thursday 19 March 2026 04:38:20 +0000 (0:00:00.350) 0:02:14.191 ******** 2026-03-19 04:38:22.238926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238949 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:22.238955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238975 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:22.238980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.238991 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:22.238996 | orchestrator | 2026-03-19 04:38:22.239001 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-19 04:38:22.239006 | orchestrator | Thursday 19 March 2026 04:38:21 +0000 (0:00:00.356) 0:02:14.547 ******** 2026-03-19 04:38:22.239011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'})  2026-03-19 04:38:22.239016 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'})  2026-03-19 04:38:22.239021 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:22.239026 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'})  2026-03-19 04:38:22.239030 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'})  2026-03-19 04:38:22.239035 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:22.239040 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'})  2026-03-19 04:38:22.239045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'})  2026-03-19 04:38:22.239055 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:22.239060 | orchestrator | 2026-03-19 04:38:22.239065 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-19 04:38:22.239071 | orchestrator | Thursday 19 March 2026 04:38:21 +0000 (0:00:00.562) 0:02:15.110 ******** 2026-03-19 04:38:22.239076 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-55f97389-0425-5b31-8593-f3b3ad53d7f9', 'data_vg': 'ceph-55f97389-0425-5b31-8593-f3b3ad53d7f9'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.239081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-432058d8-20d3-534b-84ac-2a35b6cfcd9e', 'data_vg': 'ceph-432058d8-20d3-534b-84ac-2a35b6cfcd9e'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.239086 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:22.239091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-b653c337-740c-52f4-bc46-3e8e37039a81', 'data_vg': 'ceph-b653c337-740c-52f4-bc46-3e8e37039a81'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.239104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8', 'data_vg': 'ceph-a2eacdaa-bff5-5a13-b9a9-6af0c62255c8'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.239110 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:22.239115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ab7a01d4-aa20-5ffe-8eee-b634151ce758', 'data_vg': 'ceph-ab7a01d4-aa20-5ffe-8eee-b634151ce758'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:22.239123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-eb497169-2d92-5217-a604-0fdb844d53ba', 'data_vg': 'ceph-eb497169-2d92-5217-a604-0fdb844d53ba'}, 'ansible_loop_var': 'item'})  2026-03-19 04:38:26.344138 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:26.344289 | orchestrator | 2026-03-19 04:38:26.344312 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-19 04:38:26.344328 | orchestrator | Thursday 19 March 2026 04:38:22 +0000 (0:00:00.382) 0:02:15.492 ******** 2026-03-19 04:38:26.344360 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:26.344387 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:26.344401 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:26.344415 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:26.344458 | orchestrator | skipping: 
[testbed-node-4] 2026-03-19 04:38:26.344473 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:26.344486 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:26.344511 | orchestrator | 2026-03-19 04:38:26.344526 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-19 04:38:26.344540 | orchestrator | Thursday 19 March 2026 04:38:22 +0000 (0:00:00.719) 0:02:16.211 ******** 2026-03-19 04:38:26.344554 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:26.344567 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:26.344580 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:26.344594 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:26.344608 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 04:38:26.344622 | orchestrator | 2026-03-19 04:38:26.344662 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-19 04:38:26.344676 | orchestrator | Thursday 19 March 2026 04:38:24 +0000 (0:00:01.566) 0:02:17.777 ******** 2026-03-19 04:38:26.344691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344764 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:26.344778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344848 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:26.344862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.344947 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:26.344961 | orchestrator 
| 2026-03-19 04:38:26.344975 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-19 04:38:26.344990 | orchestrator | Thursday 19 March 2026 04:38:24 +0000 (0:00:00.377) 0:02:18.154 ******** 2026-03-19 04:38:26.345003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345171 | orchestrator | 
skipping: [testbed-node-3] 2026-03-19 04:38:26.345185 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:26.345197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345263 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:26.345276 | orchestrator | 2026-03-19 04:38:26.345289 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-19 04:38:26.345303 | orchestrator | Thursday 19 March 2026 04:38:25 +0000 (0:00:00.647) 0:02:18.802 ******** 2026-03-19 04:38:26.345317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-03-19 04:38:26.345370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345383 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:26.345396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345518 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:26.345537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 04:38:26.345596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-19 04:38:26.345609 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:26.345621 | orchestrator | 2026-03-19 04:38:26.345633 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-19 04:38:26.345644 | orchestrator | Thursday 19 March 2026 04:38:25 +0000 (0:00:00.428) 0:02:19.230 ******** 2026-03-19 04:38:26.345656 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:26.345668 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:26.345695 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.627545 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.627666 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.627678 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.627690 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.627701 | orchestrator | 2026-03-19 04:38:32.627712 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-19 04:38:32.627725 | orchestrator | Thursday 19 March 2026 04:38:26 +0000 (0:00:00.704) 0:02:19.935 ******** 2026-03-19 04:38:32.627736 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.627746 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.627757 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.627768 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.627777 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.627789 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.627798 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.627809 | orchestrator | 2026-03-19 04:38:32.627818 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-19 04:38:32.627825 | orchestrator | Thursday 19 March 2026 04:38:27 +0000 (0:00:00.958) 0:02:20.894 ******** 2026-03-19 04:38:32.627832 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 04:38:32.627838 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.627844 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.627851 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.627857 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.627863 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.627870 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.627876 | orchestrator | 2026-03-19 04:38:32.627883 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-03-19 04:38:32.627890 | orchestrator | Thursday 19 March 2026 04:38:28 +0000 (0:00:00.717) 0:02:21.612 ******** 2026-03-19 04:38:32.627896 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.627902 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.627908 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.627914 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.627920 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.627926 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.627933 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.627939 | orchestrator | 2026-03-19 04:38:32.627945 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-19 04:38:32.627953 | orchestrator | Thursday 19 March 2026 04:38:29 +0000 (0:00:00.941) 0:02:22.553 ******** 2026-03-19 04:38:32.627959 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.627966 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.627972 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.627978 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.627984 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.628016 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.628024 | 
orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.628031 | orchestrator | 2026-03-19 04:38:32.628038 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-19 04:38:32.628046 | orchestrator | Thursday 19 March 2026 04:38:30 +0000 (0:00:00.913) 0:02:23.467 ******** 2026-03-19 04:38:32.628053 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.628060 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.628067 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.628074 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.628081 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.628088 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.628095 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.628101 | orchestrator | 2026-03-19 04:38:32.628108 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-19 04:38:32.628115 | orchestrator | Thursday 19 March 2026 04:38:30 +0000 (0:00:00.746) 0:02:24.214 ******** 2026-03-19 04:38:32.628122 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.628129 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.628136 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.628143 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:32.628150 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:32.628156 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:32.628163 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:32.628170 | orchestrator | 2026-03-19 04:38:32.628178 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-19 04:38:32.628185 | orchestrator | Thursday 19 March 2026 04:38:31 +0000 (0:00:00.957) 0:02:25.171 ******** 2026-03-19 04:38:32.628194 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:32.628219 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:32.628229 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:32.628239 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:32.628248 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:32.628258 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:32.628265 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:32.628288 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:32.628296 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:32.628303 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:32.628310 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:32.628318 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:32.628330 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:32.628338 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:32.628345 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:32.628352 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:32.628360 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:32.628367 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:32.628374 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:32.628382 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-03-19 04:38:32.628388 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:32.628394 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:32.628400 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:32.628407 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:32.628435 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:32.628441 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:32.628452 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:32.628458 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:32.628464 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:32.628470 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 
'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:32.628482 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.541156 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.541330 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:34.541349 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.541363 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.541379 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.541391 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.541404 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.541472 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.541484 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow 
r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:34.541496 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.541507 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:34.541518 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.541530 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:34.541549 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:34.541568 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.541588 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.541605 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:34.541624 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:34.541643 | orchestrator | 2026-03-19 04:38:34.541661 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-03-19 04:38:34.541682 | orchestrator | Thursday 19 March 2026 04:38:32 +0000 (0:00:00.976) 0:02:26.148 ******** 2026-03-19 04:38:34.541700 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
04:38:34.541718 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:34.541738 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:34.541758 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:34.541776 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:34.541817 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:34.541830 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:34.541841 | orchestrator | 2026-03-19 04:38:34.541852 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-03-19 04:38:34.541863 | orchestrator | Thursday 19 March 2026 04:38:33 +0000 (0:00:00.961) 0:02:27.110 ******** 2026-03-19 04:38:34.541888 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.541899 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.541910 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.541921 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.541955 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.541967 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-03-19 04:38:34.541978 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:34.541989 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.542000 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.542011 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.542098 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.542110 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.542122 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:34.542133 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:34.542144 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.542155 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.542165 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.542177 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.542187 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.542199 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:34.542210 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:34.542229 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.542239 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.542257 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:34.542268 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:34.542280 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:34.542291 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:34.542302 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:34.542322 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:49.688778 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:49.688908 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 04:38:49.688935 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:49.688956 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.688978 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:49.688999 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:49.689018 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-19 
04:38:49.689037 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-19 04:38:49.689058 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:49.689077 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:49.689095 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.689107 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-19 04:38:49.689118 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:49.689172 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:49.689194 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:49.689213 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.689233 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-19 04:38:49.689252 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd 
pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-19 04:38:49.689287 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-19 04:38:49.689309 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.689327 | orchestrator | 2026-03-19 04:38:49.689347 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-03-19 04:38:49.689368 | orchestrator | Thursday 19 March 2026 04:38:34 +0000 (0:00:00.955) 0:02:28.066 ******** 2026-03-19 04:38:49.689446 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.689466 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.689482 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.689495 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.689507 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.689519 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.689532 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.689544 | orchestrator | 2026-03-19 04:38:49.689557 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-19 04:38:49.689569 | orchestrator | Thursday 19 March 2026 04:38:35 +0000 (0:00:00.969) 0:02:29.035 ******** 2026-03-19 04:38:49.689581 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.689593 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.689605 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.689617 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.689629 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.689641 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.689654 | orchestrator | skipping: [testbed-manager] 2026-03-19 
04:38:49.689665 | orchestrator | 2026-03-19 04:38:49.689676 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-19 04:38:49.689708 | orchestrator | Thursday 19 March 2026 04:38:36 +0000 (0:00:00.924) 0:02:29.960 ******** 2026-03-19 04:38:49.689720 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.689730 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.689741 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.689752 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.689762 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.689773 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.689784 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.689794 | orchestrator | 2026-03-19 04:38:49.689805 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-19 04:38:49.689816 | orchestrator | Thursday 19 March 2026 04:38:38 +0000 (0:00:01.468) 0:02:31.429 ******** 2026-03-19 04:38:49.689827 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-19 04:38:49.689840 | orchestrator | 2026-03-19 04:38:49.689851 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-19 04:38:49.689873 | orchestrator | Thursday 19 March 2026 04:38:39 +0000 (0:00:01.821) 0:02:33.251 ******** 2026-03-19 04:38:49.689884 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689895 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689906 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 
04:38:49.689916 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689927 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689937 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689948 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-19 04:38:49.689958 | orchestrator | 2026-03-19 04:38:49.689969 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-03-19 04:38:49.689979 | orchestrator | Thursday 19 March 2026 04:38:40 +0000 (0:00:00.905) 0:02:34.156 ******** 2026-03-19 04:38:49.689990 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.690000 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.690011 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.690080 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.690092 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.690102 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.690113 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.690124 | orchestrator | 2026-03-19 04:38:49.690134 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-19 04:38:49.690145 | orchestrator | Thursday 19 March 2026 04:38:41 +0000 (0:00:00.990) 0:02:35.146 ******** 2026-03-19 04:38:49.690156 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.690167 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.690177 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.690189 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.690199 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.690210 | orchestrator | skipping: [testbed-node-5] 
2026-03-19 04:38:49.690221 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.690232 | orchestrator | 2026-03-19 04:38:49.690242 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-19 04:38:49.690253 | orchestrator | Thursday 19 March 2026 04:38:42 +0000 (0:00:00.761) 0:02:35.908 ******** 2026-03-19 04:38:49.690289 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:38:49.690316 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:38:49.690327 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:38:49.690338 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:38:49.690349 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:38:49.690359 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:38:49.690370 | orchestrator | ok: [testbed-manager] 2026-03-19 04:38:49.690408 | orchestrator | 2026-03-19 04:38:49.690427 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-19 04:38:49.690443 | orchestrator | Thursday 19 March 2026 04:38:44 +0000 (0:00:01.371) 0:02:37.279 ******** 2026-03-19 04:38:49.690460 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.690488 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:38:49.690506 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.690524 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.690542 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.690560 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.690577 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.690594 | orchestrator | 2026-03-19 04:38:49.690614 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-19 04:38:49.690632 | orchestrator | Thursday 19 March 2026 04:38:45 +0000 (0:00:01.452) 0:02:38.731 ******** 2026-03-19 04:38:49.690658 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.690669 | orchestrator | 
skipping: [testbed-node-1] 2026-03-19 04:38:49.690685 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:38:49.690708 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:38:49.690727 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:38:49.690744 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:38:49.690761 | orchestrator | skipping: [testbed-manager] 2026-03-19 04:38:49.690780 | orchestrator | 2026-03-19 04:38:49.690799 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-03-19 04:38:49.690818 | orchestrator | Thursday 19 March 2026 04:38:46 +0000 (0:00:01.440) 0:02:40.172 ******** 2026-03-19 04:38:49.690836 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:38:49.690855 | orchestrator | 2026-03-19 04:38:49.690873 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-19 04:38:49.690893 | orchestrator | Thursday 19 March 2026 04:38:48 +0000 (0:00:01.840) 0:02:42.012 ******** 2026-03-19 04:38:49.690911 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:38:49.690930 | orchestrator | 2026-03-19 04:38:49.690963 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-19 04:39:08.553677 | orchestrator | 2026-03-19 04:39:08.553798 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:39:08.553811 | orchestrator | Thursday 19 March 2026 04:38:49 +0000 (0:00:00.928) 0:02:42.940 ******** 2026-03-19 04:39:08.553819 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.553827 | orchestrator | 2026-03-19 04:39:08.553833 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:39:08.553840 | orchestrator | Thursday 19 March 2026 04:38:50 +0000 (0:00:00.483) 0:02:43.424 ******** 2026-03-19 04:39:08.553847 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.553854 | 
orchestrator | 2026-03-19 04:39:08.553861 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-19 04:39:08.553868 | orchestrator | Thursday 19 March 2026 04:38:50 +0000 (0:00:00.496) 0:02:43.920 ******** 2026-03-19 04:39:08.553876 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-19 04:39:08.553884 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-19 04:39:08.553891 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-19 04:39:08.553898 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-19 04:39:08.553906 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-19 04:39:08.553935 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}])  2026-03-19 04:39:08.553944 | orchestrator | 2026-03-19 04:39:08.553951 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-19 04:39:08.553957 | orchestrator | 2026-03-19 04:39:08.553996 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-19 04:39:08.554003 | orchestrator | Thursday 19 March 2026 04:39:01 +0000 (0:00:10.352) 0:02:54.273 ******** 2026-03-19 04:39:08.554009 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554061 | orchestrator | 2026-03-19 04:39:08.554068 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-19 04:39:08.554078 | orchestrator | Thursday 19 March 2026 04:39:01 +0000 (0:00:00.546) 0:02:54.820 ******** 2026-03-19 04:39:08.554091 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554105 | orchestrator | 2026-03-19 04:39:08.554115 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-19 04:39:08.554125 | orchestrator | Thursday 19 March 2026 04:39:01 +0000 (0:00:00.144) 0:02:54.964 ******** 2026-03-19 04:39:08.554136 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 04:39:08.554147 | orchestrator | 2026-03-19 04:39:08.554156 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-19 04:39:08.554167 | orchestrator | Thursday 19 March 2026 04:39:01 +0000 (0:00:00.127) 0:02:55.091 ******** 2026-03-19 04:39:08.554177 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554187 | orchestrator | 2026-03-19 04:39:08.554213 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:39:08.554225 | orchestrator | Thursday 19 March 2026 04:39:01 +0000 (0:00:00.140) 0:02:55.232 ******** 2026-03-19 04:39:08.554235 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-19 04:39:08.554246 | orchestrator | 2026-03-19 04:39:08.554257 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:39:08.554288 | orchestrator | Thursday 19 March 2026 04:39:02 +0000 (0:00:00.241) 0:02:55.473 ******** 2026-03-19 04:39:08.554311 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554320 | orchestrator | 2026-03-19 04:39:08.554327 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:39:08.554335 | orchestrator | Thursday 19 March 2026 04:39:02 +0000 (0:00:00.482) 0:02:55.956 ******** 2026-03-19 04:39:08.554342 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554369 | orchestrator | 2026-03-19 04:39:08.554377 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:39:08.554384 | orchestrator | Thursday 19 March 2026 04:39:02 +0000 (0:00:00.126) 0:02:56.082 ******** 2026-03-19 04:39:08.554391 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554398 | orchestrator | 2026-03-19 04:39:08.554405 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-03-19 04:39:08.554412 | orchestrator | Thursday 19 March 2026 04:39:03 +0000 (0:00:00.481) 0:02:56.564 ******** 2026-03-19 04:39:08.554420 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554427 | orchestrator | 2026-03-19 04:39:08.554434 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:39:08.554441 | orchestrator | Thursday 19 March 2026 04:39:03 +0000 (0:00:00.356) 0:02:56.920 ******** 2026-03-19 04:39:08.554449 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554455 | orchestrator | 2026-03-19 04:39:08.554463 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:39:08.554470 | orchestrator | Thursday 19 March 2026 04:39:03 +0000 (0:00:00.132) 0:02:57.053 ******** 2026-03-19 04:39:08.554477 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554494 | orchestrator | 2026-03-19 04:39:08.554500 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:39:08.554507 | orchestrator | Thursday 19 March 2026 04:39:03 +0000 (0:00:00.149) 0:02:57.203 ******** 2026-03-19 04:39:08.554513 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:08.554520 | orchestrator | 2026-03-19 04:39:08.554526 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:39:08.554532 | orchestrator | Thursday 19 March 2026 04:39:04 +0000 (0:00:00.138) 0:02:57.341 ******** 2026-03-19 04:39:08.554538 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554544 | orchestrator | 2026-03-19 04:39:08.554550 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:39:08.554556 | orchestrator | Thursday 19 March 2026 04:39:04 +0000 (0:00:00.138) 0:02:57.480 ******** 2026-03-19 04:39:08.554563 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:39:08.554569 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:39:08.554575 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:39:08.554581 | orchestrator | 2026-03-19 04:39:08.554587 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:39:08.554593 | orchestrator | Thursday 19 March 2026 04:39:04 +0000 (0:00:00.630) 0:02:58.110 ******** 2026-03-19 04:39:08.554600 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:08.554606 | orchestrator | 2026-03-19 04:39:08.554612 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:39:08.554618 | orchestrator | Thursday 19 March 2026 04:39:05 +0000 (0:00:00.250) 0:02:58.361 ******** 2026-03-19 04:39:08.554624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:39:08.554630 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:39:08.554637 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:39:08.554643 | orchestrator | 2026-03-19 04:39:08.554649 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:39:08.554655 | orchestrator | Thursday 19 March 2026 04:39:07 +0000 (0:00:01.992) 0:03:00.354 ******** 2026-03-19 04:39:08.554661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:39:08.554668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:39:08.554674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:39:08.554680 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:08.554686 | orchestrator | 2026-03-19 04:39:08.554692 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-03-19 04:39:08.554704 | orchestrator | Thursday 19 March 2026 04:39:07 +0000 (0:00:00.405) 0:03:00.759 ******** 2026-03-19 04:39:08.554713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:39:08.554722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:39:08.554728 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:39:08.554735 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:08.554741 | orchestrator | 2026-03-19 04:39:08.554751 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:39:08.554761 | orchestrator | Thursday 19 March 2026 04:39:08 +0000 (0:00:00.879) 0:03:01.639 ******** 2026-03-19 04:39:08.554793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.316144 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.316301 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.316320 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316366 | orchestrator | 2026-03-19 04:39:13.316387 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:39:13.316406 | orchestrator | Thursday 19 March 2026 04:39:08 +0000 (0:00:00.167) 0:03:01.806 ******** 2026-03-19 04:39:13.316427 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e6aaaabd2759', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:39:05.690471', 'end': '2026-03-19 04:39:05.740854', 'delta': '0:00:00.050383', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e6aaaabd2759'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:39:13.316451 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7d1c29d08d66', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:39:06.300427', 'end': '2026-03-19 04:39:06.359452', 'delta': '0:00:00.059025', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d1c29d08d66'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:39:13.316503 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '115813b5cae5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:39:06.868371', 'end': '2026-03-19 04:39:06.919957', 'delta': '0:00:00.051586', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['115813b5cae5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:39:13.316518 | orchestrator | 2026-03-19 04:39:13.316530 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:39:13.316562 | orchestrator | Thursday 19 March 2026 04:39:08 +0000 (0:00:00.197) 0:03:02.004 ******** 2026-03-19 04:39:13.316574 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:13.316585 | orchestrator | 2026-03-19 04:39:13.316596 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:39:13.316607 | orchestrator | 
Thursday 19 March 2026 04:39:08 +0000 (0:00:00.250) 0:03:02.254 ******** 2026-03-19 04:39:13.316617 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316628 | orchestrator | 2026-03-19 04:39:13.316639 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:39:13.316650 | orchestrator | Thursday 19 March 2026 04:39:09 +0000 (0:00:00.829) 0:03:03.084 ******** 2026-03-19 04:39:13.316661 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:13.316672 | orchestrator | 2026-03-19 04:39:13.316685 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:39:13.316698 | orchestrator | Thursday 19 March 2026 04:39:09 +0000 (0:00:00.144) 0:03:03.228 ******** 2026-03-19 04:39:13.316728 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-19 04:39:13.316741 | orchestrator | 2026-03-19 04:39:13.316753 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:39:13.316765 | orchestrator | Thursday 19 March 2026 04:39:11 +0000 (0:00:01.413) 0:03:04.642 ******** 2026-03-19 04:39:13.316791 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:13.316805 | orchestrator | 2026-03-19 04:39:13.316817 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:39:13.316829 | orchestrator | Thursday 19 March 2026 04:39:11 +0000 (0:00:00.155) 0:03:04.798 ******** 2026-03-19 04:39:13.316841 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316853 | orchestrator | 2026-03-19 04:39:13.316865 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:39:13.316877 | orchestrator | Thursday 19 March 2026 04:39:11 +0000 (0:00:00.131) 0:03:04.929 ******** 2026-03-19 04:39:13.316889 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316901 | orchestrator | 2026-03-19 
04:39:13.316913 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:39:13.316925 | orchestrator | Thursday 19 March 2026 04:39:11 +0000 (0:00:00.231) 0:03:05.160 ******** 2026-03-19 04:39:13.316937 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316949 | orchestrator | 2026-03-19 04:39:13.316961 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:39:13.316973 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.145) 0:03:05.306 ******** 2026-03-19 04:39:13.316985 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.316998 | orchestrator | 2026-03-19 04:39:13.317009 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:39:13.317021 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.137) 0:03:05.443 ******** 2026-03-19 04:39:13.317033 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.317046 | orchestrator | 2026-03-19 04:39:13.317059 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:39:13.317069 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.129) 0:03:05.573 ******** 2026-03-19 04:39:13.317080 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.317091 | orchestrator | 2026-03-19 04:39:13.317101 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:39:13.317112 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.134) 0:03:05.708 ******** 2026-03-19 04:39:13.317123 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.317133 | orchestrator | 2026-03-19 04:39:13.317144 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:39:13.317155 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.129) 
0:03:05.837 ******** 2026-03-19 04:39:13.317166 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.317184 | orchestrator | 2026-03-19 04:39:13.317195 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:39:13.317207 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.119) 0:03:05.957 ******** 2026-03-19 04:39:13.317222 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.317233 | orchestrator | 2026-03-19 04:39:13.317243 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:39:13.317254 | orchestrator | Thursday 19 March 2026 04:39:12 +0000 (0:00:00.131) 0:03:06.088 ******** 2026-03-19 04:39:13.317265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.317282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.317294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.317307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:39:13.317329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.571435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.571568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.571662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:39:13.571708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.571721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:39:13.571733 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:13.571792 | orchestrator | 2026-03-19 04:39:13.571819 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:39:13.571841 | orchestrator | Thursday 19 March 2026 04:39:13 +0000 (0:00:00.481) 0:03:06.570 ******** 2026-03-19 04:39:13.571890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.571907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.571931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.571952 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.571967 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.571980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:13.572003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:22.306218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:22.306427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:22.306447 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:39:22.306460 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306473 | orchestrator | 2026-03-19 04:39:22.306486 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:39:22.306498 | orchestrator | Thursday 19 March 2026 04:39:13 +0000 (0:00:00.251) 0:03:06.822 ******** 2026-03-19 04:39:22.306508 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:22.306520 | orchestrator | 2026-03-19 04:39:22.306531 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:39:22.306542 | orchestrator | Thursday 19 March 2026 04:39:14 +0000 (0:00:00.535) 0:03:07.357 ******** 2026-03-19 04:39:22.306553 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:22.306564 | orchestrator | 2026-03-19 04:39:22.306575 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:39:22.306613 | orchestrator | Thursday 19 March 2026 04:39:14 +0000 (0:00:00.138) 0:03:07.495 ******** 2026-03-19 04:39:22.306626 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:39:22.306637 | orchestrator | 2026-03-19 04:39:22.306647 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:39:22.306658 | orchestrator | Thursday 19 March 2026 04:39:14 +0000 (0:00:00.522) 0:03:08.017 ******** 2026-03-19 04:39:22.306671 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306685 | orchestrator | 2026-03-19 04:39:22.306697 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:39:22.306710 | orchestrator | Thursday 19 March 2026 04:39:14 +0000 (0:00:00.142) 0:03:08.160 ******** 2026-03-19 04:39:22.306722 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306734 | orchestrator | 2026-03-19 04:39:22.306746 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 
04:39:22.306758 | orchestrator | Thursday 19 March 2026 04:39:15 +0000 (0:00:00.236) 0:03:08.397 ******** 2026-03-19 04:39:22.306771 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306783 | orchestrator | 2026-03-19 04:39:22.306795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:39:22.306806 | orchestrator | Thursday 19 March 2026 04:39:15 +0000 (0:00:00.152) 0:03:08.549 ******** 2026-03-19 04:39:22.306817 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:39:22.306828 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:39:22.306839 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:39:22.306850 | orchestrator | 2026-03-19 04:39:22.306860 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:39:22.306871 | orchestrator | Thursday 19 March 2026 04:39:16 +0000 (0:00:00.877) 0:03:09.427 ******** 2026-03-19 04:39:22.306882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:39:22.306893 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:39:22.306904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:39:22.306914 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306925 | orchestrator | 2026-03-19 04:39:22.306936 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:39:22.306947 | orchestrator | Thursday 19 March 2026 04:39:16 +0000 (0:00:00.154) 0:03:09.581 ******** 2026-03-19 04:39:22.306957 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.306968 | orchestrator | 2026-03-19 04:39:22.306979 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:39:22.306990 | orchestrator | Thursday 19 March 2026 04:39:16 +0000 
(0:00:00.138) 0:03:09.720 ******** 2026-03-19 04:39:22.307000 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:39:22.307011 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:39:22.307030 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:39:22.307041 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:39:22.307052 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:39:22.307063 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:39:22.307074 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:39:22.307085 | orchestrator | 2026-03-19 04:39:22.307096 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:39:22.307106 | orchestrator | Thursday 19 March 2026 04:39:17 +0000 (0:00:01.120) 0:03:10.841 ******** 2026-03-19 04:39:22.307117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:39:22.307128 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:39:22.307146 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:39:22.307157 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:39:22.307167 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:39:22.307178 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:39:22.307189 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 
04:39:22.307200 | orchestrator | 2026-03-19 04:39:22.307211 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-19 04:39:22.307222 | orchestrator | Thursday 19 March 2026 04:39:19 +0000 (0:00:01.793) 0:03:12.634 ******** 2026-03-19 04:39:22.307233 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-19 04:39:22.307243 | orchestrator | 2026-03-19 04:39:22.307254 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-19 04:39:22.307265 | orchestrator | Thursday 19 March 2026 04:39:20 +0000 (0:00:01.327) 0:03:13.961 ******** 2026-03-19 04:39:22.307276 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.307287 | orchestrator | 2026-03-19 04:39:22.307297 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-19 04:39:22.307308 | orchestrator | Thursday 19 March 2026 04:39:20 +0000 (0:00:00.215) 0:03:14.177 ******** 2026-03-19 04:39:22.307336 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:22.307348 | orchestrator | 2026-03-19 04:39:22.307359 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-19 04:39:22.307370 | orchestrator | Thursday 19 March 2026 04:39:21 +0000 (0:00:00.142) 0:03:14.319 ******** 2026-03-19 04:39:22.307380 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-19 04:39:22.307391 | orchestrator | 2026-03-19 04:39:22.307402 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-19 04:39:22.307419 | orchestrator | Thursday 19 March 2026 04:39:22 +0000 (0:00:01.241) 0:03:15.560 ******** 2026-03-19 04:39:48.444194 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:39:48.444407 | orchestrator | 2026-03-19 04:39:48.444428 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-03-19 04:39:48.444441 | orchestrator | Thursday 19 March 2026 04:39:22 +0000 (0:00:00.140) 0:03:15.701 ********
2026-03-19 04:39:48.444454 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:39:48.444465 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:39:48.444477 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:39:48.444489 | orchestrator |
2026-03-19 04:39:48.444500 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-19 04:39:48.444511 | orchestrator | Thursday 19 March 2026 04:39:23 +0000 (0:00:01.509) 0:03:17.211 ********
2026-03-19 04:39:48.444522 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-19 04:39:48.444533 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-19 04:39:48.444545 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-19 04:39:48.444557 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-19 04:39:48.444569 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-19 04:39:48.444580 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-19 04:39:48.444591 | orchestrator |
2026-03-19 04:39:48.444602 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-19 04:39:48.444613 | orchestrator | Thursday 19 March 2026 04:39:36 +0000 (0:00:12.332) 0:03:29.544 ********
2026-03-19 04:39:48.444656 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:39:48.444668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:39:48.444679 | orchestrator |
2026-03-19 04:39:48.444690 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-19 04:39:48.444701 | orchestrator | Thursday 19 March 2026 04:39:39 +0000 (0:00:02.966) 0:03:32.511 ********
2026-03-19 04:39:48.444712 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:39:48.444722 | orchestrator |
2026-03-19 04:39:48.444733 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 04:39:48.444744 | orchestrator | Thursday 19 March 2026 04:39:40 +0000 (0:00:01.566) 0:03:34.078 ********
2026-03-19 04:39:48.444773 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-19 04:39:48.444784 | orchestrator |
2026-03-19 04:39:48.444795 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 04:39:48.444806 | orchestrator | Thursday 19 March 2026 04:39:41 +0000 (0:00:00.554) 0:03:34.632 ********
2026-03-19 04:39:48.444817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-19 04:39:48.444828 | orchestrator |
2026-03-19 04:39:48.444838 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 04:39:48.444849 | orchestrator | Thursday 19 March 2026 04:39:42 +0000 (0:00:00.802) 0:03:35.434 ********
2026-03-19 04:39:48.444860 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.444871 | orchestrator |
2026-03-19 04:39:48.444882 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 04:39:48.444893 | orchestrator | Thursday 19 March 2026 04:39:42 +0000 (0:00:00.561) 0:03:35.996 ********
2026-03-19 04:39:48.444904 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.444914 | orchestrator |
2026-03-19 04:39:48.444925 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 04:39:48.444936 | orchestrator | Thursday 19 March 2026 04:39:42 +0000 (0:00:00.136) 0:03:36.133 ********
2026-03-19 04:39:48.444947 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.444957 | orchestrator |
2026-03-19 04:39:48.444968 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 04:39:48.444979 | orchestrator | Thursday 19 March 2026 04:39:43 +0000 (0:00:00.156) 0:03:36.290 ********
2026-03-19 04:39:48.444990 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445002 | orchestrator |
2026-03-19 04:39:48.445021 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 04:39:48.445040 | orchestrator | Thursday 19 March 2026 04:39:43 +0000 (0:00:00.141) 0:03:36.432 ********
2026-03-19 04:39:48.445058 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445075 | orchestrator |
2026-03-19 04:39:48.445095 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 04:39:48.445114 | orchestrator | Thursday 19 March 2026 04:39:43 +0000 (0:00:00.611) 0:03:37.043 ********
2026-03-19 04:39:48.445134 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445153 | orchestrator |
2026-03-19 04:39:48.445164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 04:39:48.445175 | orchestrator | Thursday 19 March 2026 04:39:43 +0000 (0:00:00.131) 0:03:37.175 ********
2026-03-19 04:39:48.445186 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445197 | orchestrator |
2026-03-19 04:39:48.445208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 04:39:48.445218 | orchestrator | Thursday 19 March 2026 04:39:44 +0000 (0:00:00.123) 0:03:37.298 ********
2026-03-19 04:39:48.445229 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445240 | orchestrator |
2026-03-19 04:39:48.445251 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 04:39:48.445262 | orchestrator | Thursday 19 March 2026 04:39:44 +0000 (0:00:00.554) 0:03:37.853 ********
2026-03-19 04:39:48.445273 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445341 | orchestrator |
2026-03-19 04:39:48.445374 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 04:39:48.445386 | orchestrator | Thursday 19 March 2026 04:39:45 +0000 (0:00:00.604) 0:03:38.457 ********
2026-03-19 04:39:48.445397 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445408 | orchestrator |
2026-03-19 04:39:48.445419 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 04:39:48.445430 | orchestrator | Thursday 19 March 2026 04:39:45 +0000 (0:00:00.131) 0:03:38.588 ********
2026-03-19 04:39:48.445441 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445452 | orchestrator |
2026-03-19 04:39:48.445463 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 04:39:48.445474 | orchestrator | Thursday 19 March 2026 04:39:45 +0000 (0:00:00.144) 0:03:38.733 ********
2026-03-19 04:39:48.445485 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445496 | orchestrator |
2026-03-19 04:39:48.445507 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 04:39:48.445518 | orchestrator | Thursday 19 March 2026 04:39:45 +0000 (0:00:00.129) 0:03:38.862 ********
2026-03-19 04:39:48.445528 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445539 | orchestrator |
2026-03-19 04:39:48.445556 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 04:39:48.445581 | orchestrator | Thursday 19 March 2026 04:39:45 +0000 (0:00:00.125) 0:03:38.988 ********
2026-03-19 04:39:48.445605 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445624 | orchestrator |
2026-03-19 04:39:48.445642 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 04:39:48.445660 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.355) 0:03:39.343 ********
2026-03-19 04:39:48.445678 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445697 | orchestrator |
2026-03-19 04:39:48.445717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 04:39:48.445735 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.139) 0:03:39.482 ********
2026-03-19 04:39:48.445760 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.445783 | orchestrator |
2026-03-19 04:39:48.445800 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 04:39:48.445818 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.136) 0:03:39.619 ********
2026-03-19 04:39:48.445835 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445851 | orchestrator |
2026-03-19 04:39:48.445867 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 04:39:48.445883 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.144) 0:03:39.764 ********
2026-03-19 04:39:48.445899 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445916 | orchestrator |
2026-03-19 04:39:48.445933 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 04:39:48.445950 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.168) 0:03:39.932 ********
2026-03-19 04:39:48.445968 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:39:48.445985 | orchestrator |
2026-03-19 04:39:48.446156 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 04:39:48.446188 | orchestrator | Thursday 19 March 2026 04:39:46 +0000 (0:00:00.235) 0:03:40.167 ********
2026-03-19 04:39:48.446200 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446211 | orchestrator |
2026-03-19 04:39:48.446222 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 04:39:48.446233 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.162) 0:03:40.330 ********
2026-03-19 04:39:48.446243 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446254 | orchestrator |
2026-03-19 04:39:48.446264 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:39:48.446275 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.128) 0:03:40.458 ********
2026-03-19 04:39:48.446329 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446368 | orchestrator |
2026-03-19 04:39:48.446388 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:39:48.446407 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.121) 0:03:40.580 ********
2026-03-19 04:39:48.446426 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446438 | orchestrator |
2026-03-19 04:39:48.446449 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:39:48.446460 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.118) 0:03:40.698 ********
2026-03-19 04:39:48.446471 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446482 | orchestrator |
2026-03-19 04:39:48.446493 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:39:48.446504 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.124) 0:03:40.823 ********
2026-03-19 04:39:48.446515 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446525 | orchestrator |
2026-03-19 04:39:48.446536 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:39:48.446547 | orchestrator | Thursday 19 March 2026 04:39:47 +0000 (0:00:00.126) 0:03:40.949 ********
2026-03-19 04:39:48.446557 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446568 | orchestrator |
2026-03-19 04:39:48.446579 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:39:48.446591 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.352) 0:03:41.302 ********
2026-03-19 04:39:48.446601 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446612 | orchestrator |
2026-03-19 04:39:48.446623 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:39:48.446634 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.139) 0:03:41.430 ********
2026-03-19 04:39:48.446644 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446655 | orchestrator |
2026-03-19 04:39:48.446666 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:39:48.446677 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.125) 0:03:41.569 ********
2026-03-19 04:39:48.446688 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:39:48.446698 | orchestrator |
2026-03-19 04:39:48.446709 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:39:48.446720 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.137) 0:03:41.695 ********
2026-03-19 04:40:07.448290 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448353 | orchestrator |
2026-03-19 04:40:07.448359 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:40:07.448365 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.137) 0:03:41.832 ********
2026-03-19 04:40:07.448370 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448374 | orchestrator |
2026-03-19 04:40:07.448379 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:40:07.448383 | orchestrator | Thursday 19 March 2026 04:39:48 +0000 (0:00:00.204) 0:03:42.037 ********
2026-03-19 04:40:07.448388 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448393 | orchestrator |
2026-03-19 04:40:07.448397 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:40:07.448401 | orchestrator | Thursday 19 March 2026 04:39:49 +0000 (0:00:00.981) 0:03:43.018 ********
2026-03-19 04:40:07.448406 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448410 | orchestrator |
2026-03-19 04:40:07.448414 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:40:07.448419 | orchestrator | Thursday 19 March 2026 04:39:51 +0000 (0:00:01.393) 0:03:44.412 ********
2026-03-19 04:40:07.448423 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-19 04:40:07.448428 | orchestrator |
2026-03-19 04:40:07.448432 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:40:07.448437 | orchestrator | Thursday 19 March 2026 04:39:51 +0000 (0:00:00.595) 0:03:45.008 ********
2026-03-19 04:40:07.448453 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448458 | orchestrator |
2026-03-19 04:40:07.448462 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:40:07.448467 | orchestrator | Thursday 19 March 2026 04:39:51 +0000 (0:00:00.116) 0:03:45.124 ********
2026-03-19 04:40:07.448471 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448475 | orchestrator |
2026-03-19 04:40:07.448480 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:40:07.448484 | orchestrator | Thursday 19 March 2026 04:39:51 +0000 (0:00:00.129) 0:03:45.254 ********
2026-03-19 04:40:07.448488 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:40:07.448493 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:40:07.448498 | orchestrator |
2026-03-19 04:40:07.448502 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:40:07.448506 | orchestrator | Thursday 19 March 2026 04:39:53 +0000 (0:00:01.204) 0:03:46.458 ********
2026-03-19 04:40:07.448511 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448515 | orchestrator |
2026-03-19 04:40:07.448519 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:40:07.448530 | orchestrator | Thursday 19 March 2026 04:39:53 +0000 (0:00:00.689) 0:03:47.148 ********
2026-03-19 04:40:07.448535 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448539 | orchestrator |
2026-03-19 04:40:07.448543 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:40:07.448548 | orchestrator | Thursday 19 March 2026 04:39:54 +0000 (0:00:00.166) 0:03:47.314 ********
2026-03-19 04:40:07.448552 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448556 | orchestrator |
2026-03-19 04:40:07.448561 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:40:07.448565 | orchestrator | Thursday 19 March 2026 04:39:54 +0000 (0:00:00.144) 0:03:47.460 ********
2026-03-19 04:40:07.448569 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448574 | orchestrator |
2026-03-19 04:40:07.448578 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:40:07.448582 | orchestrator | Thursday 19 March 2026 04:39:54 +0000 (0:00:00.135) 0:03:47.595 ********
2026-03-19 04:40:07.448587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-19 04:40:07.448591 | orchestrator |
2026-03-19 04:40:07.448595 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:40:07.448600 | orchestrator | Thursday 19 March 2026 04:39:54 +0000 (0:00:00.578) 0:03:48.174 ********
2026-03-19 04:40:07.448604 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448608 | orchestrator |
2026-03-19 04:40:07.448613 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:40:07.448617 | orchestrator | Thursday 19 March 2026 04:39:55 +0000 (0:00:00.775) 0:03:48.949 ********
2026-03-19 04:40:07.448622 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:40:07.448626 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:40:07.448630 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:40:07.448637 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448645 | orchestrator |
2026-03-19 04:40:07.448651 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:40:07.448658 | orchestrator | Thursday 19 March 2026 04:39:55 +0000 (0:00:00.149) 0:03:49.099 ********
2026-03-19 04:40:07.448665 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448672 | orchestrator |
2026-03-19 04:40:07.448680 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:40:07.448687 | orchestrator | Thursday 19 March 2026 04:39:55 +0000 (0:00:00.115) 0:03:49.215 ********
2026-03-19 04:40:07.448700 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448707 | orchestrator |
2026-03-19 04:40:07.448714 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:40:07.448722 | orchestrator | Thursday 19 March 2026 04:39:56 +0000 (0:00:00.192) 0:03:49.407 ********
2026-03-19 04:40:07.448729 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448736 | orchestrator |
2026-03-19 04:40:07.448741 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:40:07.448754 | orchestrator | Thursday 19 March 2026 04:39:56 +0000 (0:00:00.167) 0:03:49.574 ********
2026-03-19 04:40:07.448759 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448763 | orchestrator |
2026-03-19 04:40:07.448768 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:40:07.448772 | orchestrator | Thursday 19 March 2026 04:39:56 +0000 (0:00:00.153) 0:03:49.728 ********
2026-03-19 04:40:07.448776 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448781 | orchestrator |
2026-03-19 04:40:07.448785 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:40:07.448790 | orchestrator | Thursday 19 March 2026 04:39:56 +0000 (0:00:00.363) 0:03:50.091 ********
2026-03-19 04:40:07.448794 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448798 | orchestrator |
2026-03-19 04:40:07.448802 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:40:07.448807 | orchestrator | Thursday 19 March 2026 04:39:58 +0000 (0:00:01.801) 0:03:51.892 ********
2026-03-19 04:40:07.448811 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.448815 | orchestrator |
2026-03-19 04:40:07.448820 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:40:07.448824 | orchestrator | Thursday 19 March 2026 04:39:58 +0000 (0:00:00.147) 0:03:52.040 ********
2026-03-19 04:40:07.448828 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-19 04:40:07.448833 | orchestrator |
2026-03-19 04:40:07.448837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:40:07.448841 | orchestrator | Thursday 19 March 2026 04:39:59 +0000 (0:00:00.599) 0:03:52.639 ********
2026-03-19 04:40:07.448846 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448851 | orchestrator |
2026-03-19 04:40:07.448857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:40:07.448862 | orchestrator | Thursday 19 March 2026 04:39:59 +0000 (0:00:00.138) 0:03:52.778 ********
2026-03-19 04:40:07.448867 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448872 | orchestrator |
2026-03-19 04:40:07.448877 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:40:07.448882 | orchestrator | Thursday 19 March 2026 04:39:59 +0000 (0:00:00.142) 0:03:52.921 ********
2026-03-19 04:40:07.448887 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448892 | orchestrator |
2026-03-19 04:40:07.448897 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:40:07.448902 | orchestrator | Thursday 19 March 2026 04:39:59 +0000 (0:00:00.137) 0:03:53.058 ********
2026-03-19 04:40:07.448907 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448911 | orchestrator |
2026-03-19 04:40:07.448916 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:40:07.448922 | orchestrator | Thursday 19 March 2026 04:39:59 +0000 (0:00:00.136) 0:03:53.195 ********
2026-03-19 04:40:07.448926 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448931 | orchestrator |
2026-03-19 04:40:07.448939 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:40:07.448945 | orchestrator | Thursday 19 March 2026 04:40:00 +0000 (0:00:00.143) 0:03:53.338 ********
2026-03-19 04:40:07.448949 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448954 | orchestrator |
2026-03-19 04:40:07.448959 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:40:07.448964 | orchestrator | Thursday 19 March 2026 04:40:00 +0000 (0:00:00.139) 0:03:53.478 ********
2026-03-19 04:40:07.448972 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448978 | orchestrator |
2026-03-19 04:40:07.448983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:40:07.448988 | orchestrator | Thursday 19 March 2026 04:40:00 +0000 (0:00:00.146) 0:03:53.624 ********
2026-03-19 04:40:07.448993 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:07.448997 | orchestrator |
2026-03-19 04:40:07.449002 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:40:07.449007 | orchestrator | Thursday 19 March 2026 04:40:00 +0000 (0:00:00.147) 0:03:53.772 ********
2026-03-19 04:40:07.449012 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:40:07.449017 | orchestrator |
2026-03-19 04:40:07.449022 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:40:07.449027 | orchestrator | Thursday 19 March 2026 04:40:00 +0000 (0:00:00.443) 0:03:54.215 ********
2026-03-19 04:40:07.449032 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-19 04:40:07.449037 | orchestrator |
2026-03-19 04:40:07.449042 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:40:07.449047 | orchestrator | Thursday 19 March 2026 04:40:01 +0000 (0:00:00.542) 0:03:54.757 ********
2026-03-19 04:40:07.449052 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-19 04:40:07.449057 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-19 04:40:07.449062 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-19 04:40:07.449067 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-19 04:40:07.449072 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-19 04:40:07.449077 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-19 04:40:07.449082 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-19 04:40:07.449087 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:40:07.449092 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:40:07.449097 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:40:07.449102 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:40:07.449107 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:40:07.449112 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:40:07.449117 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:40:07.449125 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-19 04:40:20.352548 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-19 04:40:20.352629 | orchestrator |
2026-03-19 04:40:20.352635 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:40:20.352641 | orchestrator | Thursday 19 March 2026 04:40:07 +0000 (0:00:05.931) 0:04:00.689 ********
2026-03-19 04:40:20.352646 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352651 | orchestrator |
2026-03-19 04:40:20.352655 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:40:20.352660 | orchestrator | Thursday 19 March 2026 04:40:07 +0000 (0:00:00.124) 0:04:00.814 ********
2026-03-19 04:40:20.352664 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352668 | orchestrator |
2026-03-19 04:40:20.352672 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:40:20.352676 | orchestrator | Thursday 19 March 2026 04:40:07 +0000 (0:00:00.132) 0:04:00.946 ********
2026-03-19 04:40:20.352680 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352684 | orchestrator |
2026-03-19 04:40:20.352688 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:40:20.352693 | orchestrator | Thursday 19 March 2026 04:40:07 +0000 (0:00:00.152) 0:04:01.099 ********
2026-03-19 04:40:20.352697 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352716 | orchestrator |
2026-03-19 04:40:20.352720 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:40:20.352724 | orchestrator | Thursday 19 March 2026 04:40:07 +0000 (0:00:00.154) 0:04:01.253 ********
2026-03-19 04:40:20.352728 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352732 | orchestrator |
2026-03-19 04:40:20.352736 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:40:20.352740 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.146) 0:04:01.399 ********
2026-03-19 04:40:20.352744 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352748 | orchestrator |
2026-03-19 04:40:20.352752 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:40:20.352757 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.127) 0:04:01.527 ********
2026-03-19 04:40:20.352761 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352765 | orchestrator |
2026-03-19 04:40:20.352769 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:40:20.352773 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.137) 0:04:01.665 ********
2026-03-19 04:40:20.352777 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352781 | orchestrator |
2026-03-19 04:40:20.352785 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:40:20.352789 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.113) 0:04:01.778 ********
2026-03-19 04:40:20.352804 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352808 | orchestrator |
2026-03-19 04:40:20.352813 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:40:20.352822 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.131) 0:04:01.910 ********
2026-03-19 04:40:20.352827 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352831 | orchestrator |
2026-03-19 04:40:20.352835 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:40:20.352839 | orchestrator | Thursday 19 March 2026 04:40:08 +0000 (0:00:00.342) 0:04:02.252 ********
2026-03-19 04:40:20.352843 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352847 | orchestrator |
2026-03-19 04:40:20.352851 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:40:20.352855 | orchestrator | Thursday 19 March 2026 04:40:09 +0000 (0:00:00.132) 0:04:02.384 ********
2026-03-19 04:40:20.352860 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352864 | orchestrator |
2026-03-19 04:40:20.352867 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:40:20.352871 | orchestrator | Thursday 19 March 2026 04:40:09 +0000 (0:00:00.186) 0:04:02.571 ********
2026-03-19 04:40:20.352875 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352879 | orchestrator |
2026-03-19 04:40:20.352883 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:40:20.352887 | orchestrator | Thursday 19 March 2026 04:40:09 +0000 (0:00:00.226) 0:04:02.797 ********
2026-03-19 04:40:20.352891 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352895 | orchestrator |
2026-03-19 04:40:20.352899 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:40:20.352903 | orchestrator | Thursday 19 March 2026 04:40:09 +0000 (0:00:00.120) 0:04:02.918 ********
2026-03-19 04:40:20.352907 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352912 | orchestrator |
2026-03-19 04:40:20.352916 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:40:20.352919 | orchestrator | Thursday 19 March 2026 04:40:09 +0000 (0:00:00.233) 0:04:03.152 ********
2026-03-19 04:40:20.352923 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352927 | orchestrator |
2026-03-19 04:40:20.352931 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:40:20.352935 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.131) 0:04:03.283 ********
2026-03-19 04:40:20.352943 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352947 | orchestrator |
2026-03-19 04:40:20.352952 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:40:20.352957 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.128) 0:04:03.411 ********
2026-03-19 04:40:20.352961 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352965 | orchestrator |
2026-03-19 04:40:20.352969 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:40:20.352973 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.136) 0:04:03.547 ********
2026-03-19 04:40:20.352977 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.352981 | orchestrator |
2026-03-19 04:40:20.352995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:40:20.352999 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.129) 0:04:03.677 ********
2026-03-19 04:40:20.353003 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353007 | orchestrator |
2026-03-19 04:40:20.353011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:40:20.353015 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.127) 0:04:03.804 ********
2026-03-19 04:40:20.353019 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353023 | orchestrator |
2026-03-19 04:40:20.353027 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:40:20.353031 | orchestrator | Thursday 19 March 2026 04:40:10 +0000 (0:00:00.130) 0:04:03.935 ********
2026-03-19 04:40:20.353035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:40:20.353040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:40:20.353044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:40:20.353047 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353051 | orchestrator |
2026-03-19 04:40:20.353055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:40:20.353059 | orchestrator | Thursday 19 March 2026 04:40:11 +0000 (0:00:00.682) 0:04:04.617 ********
2026-03-19 04:40:20.353063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:40:20.353067 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:40:20.353071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:40:20.353075 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353079 | orchestrator |
2026-03-19 04:40:20.353083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:40:20.353087 | orchestrator | Thursday 19 March 2026 04:40:12 +0000 (0:00:00.929) 0:04:05.547 ********
2026-03-19 04:40:20.353091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:40:20.353095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:40:20.353099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:40:20.353103 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353107 | orchestrator |
2026-03-19 04:40:20.353111 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:40:20.353115 | orchestrator | Thursday 19 March 2026 04:40:12 +0000 (0:00:00.418) 0:04:05.965 ********
2026-03-19 04:40:20.353119 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:40:20.353124 | orchestrator |
2026-03-19
04:40:20.353128 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:40:20.353133 | orchestrator | Thursday 19 March 2026 04:40:12 +0000 (0:00:00.137) 0:04:06.102 ******** 2026-03-19 04:40:20.353141 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-19 04:40:20.353146 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:20.353150 | orchestrator | 2026-03-19 04:40:20.353155 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:40:20.353163 | orchestrator | Thursday 19 March 2026 04:40:13 +0000 (0:00:00.620) 0:04:06.722 ******** 2026-03-19 04:40:20.353168 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:20.353172 | orchestrator | 2026-03-19 04:40:20.353177 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:40:20.353181 | orchestrator | Thursday 19 March 2026 04:40:14 +0000 (0:00:00.851) 0:04:07.574 ******** 2026-03-19 04:40:20.353186 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:20.353190 | orchestrator | 2026-03-19 04:40:20.353195 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-19 04:40:20.353199 | orchestrator | Thursday 19 March 2026 04:40:14 +0000 (0:00:00.154) 0:04:07.728 ******** 2026-03-19 04:40:20.353204 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-03-19 04:40:20.353209 | orchestrator | 2026-03-19 04:40:20.353214 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-19 04:40:20.353218 | orchestrator | Thursday 19 March 2026 04:40:15 +0000 (0:00:00.600) 0:04:08.329 ******** 2026-03-19 04:40:20.353222 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-19 04:40:20.353240 | orchestrator | 2026-03-19 04:40:20.353247 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-03-19 04:40:20.353254 | orchestrator | Thursday 19 March 2026 04:40:17 +0000 (0:00:02.287) 0:04:10.617 ******** 2026-03-19 04:40:20.353261 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:20.353268 | orchestrator | 2026-03-19 04:40:20.353274 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-19 04:40:20.353281 | orchestrator | Thursday 19 March 2026 04:40:17 +0000 (0:00:00.182) 0:04:10.800 ******** 2026-03-19 04:40:20.353286 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:20.353290 | orchestrator | 2026-03-19 04:40:20.353295 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-19 04:40:20.353300 | orchestrator | Thursday 19 March 2026 04:40:17 +0000 (0:00:00.158) 0:04:10.959 ******** 2026-03-19 04:40:20.353304 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:20.353308 | orchestrator | 2026-03-19 04:40:20.353313 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-19 04:40:20.353317 | orchestrator | Thursday 19 March 2026 04:40:18 +0000 (0:00:00.410) 0:04:11.369 ******** 2026-03-19 04:40:20.353322 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:40:20.353327 | orchestrator | 2026-03-19 04:40:20.353333 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-19 04:40:20.353340 | orchestrator | Thursday 19 March 2026 04:40:19 +0000 (0:00:01.107) 0:04:12.476 ******** 2026-03-19 04:40:20.353346 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:20.353352 | orchestrator | 2026-03-19 04:40:20.353358 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-19 04:40:20.353365 | orchestrator | Thursday 19 March 2026 04:40:19 +0000 (0:00:00.597) 0:04:13.074 ******** 2026-03-19 04:40:20.353371 | orchestrator | ok: 
[testbed-node-0] 2026-03-19 04:40:20.353377 | orchestrator | 2026-03-19 04:40:20.353387 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-19 04:40:53.589515 | orchestrator | Thursday 19 March 2026 04:40:20 +0000 (0:00:00.531) 0:04:13.606 ******** 2026-03-19 04:40:53.589614 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589626 | orchestrator | 2026-03-19 04:40:53.589635 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-19 04:40:53.589643 | orchestrator | Thursday 19 March 2026 04:40:20 +0000 (0:00:00.466) 0:04:14.072 ******** 2026-03-19 04:40:53.589651 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589658 | orchestrator | 2026-03-19 04:40:53.589665 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-19 04:40:53.589673 | orchestrator | Thursday 19 March 2026 04:40:21 +0000 (0:00:00.709) 0:04:14.781 ******** 2026-03-19 04:40:53.589680 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589687 | orchestrator | 2026-03-19 04:40:53.589694 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-19 04:40:53.589722 | orchestrator | Thursday 19 March 2026 04:40:22 +0000 (0:00:00.741) 0:04:15.523 ******** 2026-03-19 04:40:53.589730 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 04:40:53.589738 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-19 04:40:53.589745 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 04:40:53.589753 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-19 04:40:53.589760 | orchestrator | 2026-03-19 04:40:53.589767 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-19 04:40:53.589775 | orchestrator | Thursday 19 March 2026 04:40:25 +0000 
(0:00:03.069) 0:04:18.592 ******** 2026-03-19 04:40:53.589782 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:40:53.589789 | orchestrator | 2026-03-19 04:40:53.589796 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-19 04:40:53.589803 | orchestrator | Thursday 19 March 2026 04:40:26 +0000 (0:00:01.057) 0:04:19.649 ******** 2026-03-19 04:40:53.589811 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589818 | orchestrator | 2026-03-19 04:40:53.589826 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-19 04:40:53.589833 | orchestrator | Thursday 19 March 2026 04:40:26 +0000 (0:00:00.143) 0:04:19.793 ******** 2026-03-19 04:40:53.589840 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589847 | orchestrator | 2026-03-19 04:40:53.589854 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-19 04:40:53.589861 | orchestrator | Thursday 19 March 2026 04:40:26 +0000 (0:00:00.135) 0:04:19.928 ******** 2026-03-19 04:40:53.589869 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589876 | orchestrator | 2026-03-19 04:40:53.589883 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-19 04:40:53.589890 | orchestrator | Thursday 19 March 2026 04:40:27 +0000 (0:00:01.005) 0:04:20.934 ******** 2026-03-19 04:40:53.589897 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.589904 | orchestrator | 2026-03-19 04:40:53.589923 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-19 04:40:53.589931 | orchestrator | Thursday 19 March 2026 04:40:28 +0000 (0:00:00.505) 0:04:21.439 ******** 2026-03-19 04:40:53.589938 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:53.589945 | orchestrator | 2026-03-19 04:40:53.589952 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-19 04:40:53.589959 | orchestrator | Thursday 19 March 2026 04:40:28 +0000 (0:00:00.387) 0:04:21.827 ******** 2026-03-19 04:40:53.589966 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-19 04:40:53.589974 | orchestrator | 2026-03-19 04:40:53.589982 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-19 04:40:53.589989 | orchestrator | Thursday 19 March 2026 04:40:29 +0000 (0:00:00.566) 0:04:22.394 ******** 2026-03-19 04:40:53.589996 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:53.590003 | orchestrator | 2026-03-19 04:40:53.590010 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-19 04:40:53.590063 | orchestrator | Thursday 19 March 2026 04:40:29 +0000 (0:00:00.138) 0:04:22.532 ******** 2026-03-19 04:40:53.590072 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:53.590079 | orchestrator | 2026-03-19 04:40:53.590086 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-19 04:40:53.590093 | orchestrator | Thursday 19 March 2026 04:40:29 +0000 (0:00:00.119) 0:04:22.652 ******** 2026-03-19 04:40:53.590122 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-19 04:40:53.590130 | orchestrator | 2026-03-19 04:40:53.590138 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-19 04:40:53.590146 | orchestrator | Thursday 19 March 2026 04:40:29 +0000 (0:00:00.587) 0:04:23.240 ******** 2026-03-19 04:40:53.590158 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.590197 | orchestrator | 2026-03-19 04:40:53.590210 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-19 04:40:53.590221 | orchestrator | Thursday 19 March 2026 04:40:31 +0000 
(0:00:01.326) 0:04:24.566 ******** 2026-03-19 04:40:53.590232 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.590244 | orchestrator | 2026-03-19 04:40:53.590255 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-19 04:40:53.590267 | orchestrator | Thursday 19 March 2026 04:40:32 +0000 (0:00:00.994) 0:04:25.560 ******** 2026-03-19 04:40:53.590279 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.590290 | orchestrator | 2026-03-19 04:40:53.590301 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-19 04:40:53.590314 | orchestrator | Thursday 19 March 2026 04:40:33 +0000 (0:00:01.430) 0:04:26.991 ******** 2026-03-19 04:40:53.590326 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:40:53.590338 | orchestrator | 2026-03-19 04:40:53.590349 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-19 04:40:53.590360 | orchestrator | Thursday 19 March 2026 04:40:36 +0000 (0:00:02.316) 0:04:29.307 ******** 2026-03-19 04:40:53.590373 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-19 04:40:53.590385 | orchestrator | 2026-03-19 04:40:53.590418 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-19 04:40:53.590432 | orchestrator | Thursday 19 March 2026 04:40:36 +0000 (0:00:00.584) 0:04:29.892 ******** 2026-03-19 04:40:53.590444 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.590457 | orchestrator | 2026-03-19 04:40:53.590469 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-19 04:40:53.590481 | orchestrator | Thursday 19 March 2026 04:40:38 +0000 (0:00:01.616) 0:04:31.508 ******** 2026-03-19 04:40:53.590492 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:40:53.590500 | orchestrator | 2026-03-19 04:40:53.590507 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-19 04:40:53.590514 | orchestrator | Thursday 19 March 2026 04:40:40 +0000 (0:00:02.173) 0:04:33.682 ******** 2026-03-19 04:40:53.590521 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:53.590528 | orchestrator | 2026-03-19 04:40:53.590535 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-19 04:40:53.590611 | orchestrator | Thursday 19 March 2026 04:40:40 +0000 (0:00:00.130) 0:04:33.813 ******** 2026-03-19 04:40:53.590622 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-19 04:40:53.590632 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-19 04:40:53.590640 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-19 04:40:53.590656 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-19 04:40:53.590674 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-19 04:40:53.590683 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}])  2026-03-19 04:40:53.590692 | orchestrator | 2026-03-19 04:40:53.590700 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-19 04:40:53.590707 | orchestrator | Thursday 19 March 2026 04:40:50 +0000 (0:00:10.017) 0:04:43.831 ******** 
2026-03-19 04:40:53.590715 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:40:53.590722 | orchestrator | 2026-03-19 04:40:53.590729 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:40:53.590736 | orchestrator | Thursday 19 March 2026 04:40:52 +0000 (0:00:01.473) 0:04:45.304 ******** 2026-03-19 04:40:53.590744 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:40:53.590752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:40:53.590759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:40:53.590766 | orchestrator | 2026-03-19 04:40:53.590773 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:40:53.590781 | orchestrator | Thursday 19 March 2026 04:40:53 +0000 (0:00:01.070) 0:04:46.375 ******** 2026-03-19 04:40:53.590788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:40:53.590797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:40:53.590805 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:40:53.590814 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:40:53.590823 | orchestrator | 2026-03-19 04:40:53.590831 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-19 04:40:53.590848 | orchestrator | Thursday 19 March 2026 04:40:53 +0000 (0:00:00.461) 0:04:46.837 ******** 2026-03-19 04:41:04.506695 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:41:04.506804 | orchestrator | 2026-03-19 04:41:04.506813 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-19 04:41:04.506819 | orchestrator | Thursday 19 March 2026 04:40:53 +0000 (0:00:00.126) 0:04:46.964 ******** 2026-03-19 04:41:04.506824 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:41:04.506829 | orchestrator | 2026-03-19 04:41:04.506833 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-19 04:41:04.506838 | orchestrator | 2026-03-19 04:41:04.506842 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-19 04:41:04.506846 | orchestrator | Thursday 19 March 2026 04:40:56 +0000 (0:00:02.828) 0:04:49.792 ******** 2026-03-19 04:41:04.506851 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.506855 | orchestrator | 2026-03-19 04:41:04.506859 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-19 04:41:04.506877 | orchestrator | Thursday 19 March 2026 04:40:57 +0000 (0:00:00.538) 0:04:50.330 ******** 2026-03-19 04:41:04.506882 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.506886 | orchestrator | 2026-03-19 04:41:04.506890 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-19 04:41:04.506894 | orchestrator | Thursday 19 March 2026 04:40:57 +0000 (0:00:00.408) 0:04:50.739 ******** 2026-03-19 04:41:04.506898 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:04.506903 | orchestrator | 2026-03-19 04:41:04.506907 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-19 04:41:04.506928 | orchestrator | Thursday 19 March 2026 04:40:57 +0000 (0:00:00.115) 0:04:50.854 ******** 2026-03-19 04:41:04.506969 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.506975 | orchestrator | 2026-03-19 04:41:04.506980 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:41:04.506985 | orchestrator | Thursday 19 March 
2026 04:40:57 +0000 (0:00:00.156) 0:04:51.010 ******** 2026-03-19 04:41:04.506989 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-19 04:41:04.506994 | orchestrator | 2026-03-19 04:41:04.506998 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:41:04.507002 | orchestrator | Thursday 19 March 2026 04:40:58 +0000 (0:00:00.261) 0:04:51.272 ******** 2026-03-19 04:41:04.507006 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507010 | orchestrator | 2026-03-19 04:41:04.507014 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:41:04.507018 | orchestrator | Thursday 19 March 2026 04:40:58 +0000 (0:00:00.500) 0:04:51.772 ******** 2026-03-19 04:41:04.507022 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507026 | orchestrator | 2026-03-19 04:41:04.507030 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:41:04.507034 | orchestrator | Thursday 19 March 2026 04:40:58 +0000 (0:00:00.140) 0:04:51.913 ******** 2026-03-19 04:41:04.507054 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507058 | orchestrator | 2026-03-19 04:41:04.507062 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:41:04.507073 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.477) 0:04:52.391 ******** 2026-03-19 04:41:04.507077 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507081 | orchestrator | 2026-03-19 04:41:04.507085 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:41:04.507089 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.142) 0:04:52.533 ******** 2026-03-19 04:41:04.507093 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507097 | orchestrator | 2026-03-19 04:41:04.507101 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:41:04.507105 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.144) 0:04:52.678 ******** 2026-03-19 04:41:04.507109 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507112 | orchestrator | 2026-03-19 04:41:04.507116 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:41:04.507120 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.146) 0:04:52.824 ******** 2026-03-19 04:41:04.507124 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:04.507128 | orchestrator | 2026-03-19 04:41:04.507132 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:41:04.507136 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.166) 0:04:52.990 ******** 2026-03-19 04:41:04.507140 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507144 | orchestrator | 2026-03-19 04:41:04.507148 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:41:04.507152 | orchestrator | Thursday 19 March 2026 04:40:59 +0000 (0:00:00.120) 0:04:53.111 ******** 2026-03-19 04:41:04.507156 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:41:04.507160 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:04.507184 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:41:04.507188 | orchestrator | 2026-03-19 04:41:04.507192 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:41:04.507196 | orchestrator | Thursday 19 March 2026 04:41:00 +0000 (0:00:01.134) 0:04:54.245 ******** 2026-03-19 04:41:04.507200 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:04.507204 | 
orchestrator | 2026-03-19 04:41:04.507208 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:41:04.507217 | orchestrator | Thursday 19 March 2026 04:41:01 +0000 (0:00:00.242) 0:04:54.488 ******** 2026-03-19 04:41:04.507221 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:41:04.507225 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:04.507229 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:41:04.507233 | orchestrator | 2026-03-19 04:41:04.507237 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:41:04.507242 | orchestrator | Thursday 19 March 2026 04:41:03 +0000 (0:00:01.880) 0:04:56.368 ******** 2026-03-19 04:41:04.507257 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:41:04.507263 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:41:04.507268 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:41:04.507273 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:04.507277 | orchestrator | 2026-03-19 04:41:04.507282 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:41:04.507287 | orchestrator | Thursday 19 March 2026 04:41:03 +0000 (0:00:00.408) 0:04:56.777 ******** 2026-03-19 04:41:04.507293 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507301 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507310 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:04.507315 | orchestrator | 2026-03-19 04:41:04.507319 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:41:04.507324 | orchestrator | Thursday 19 March 2026 04:41:04 +0000 (0:00:00.619) 0:04:57.397 ******** 2026-03-19 04:41:04.507330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507341 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:04.507351 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:04.507355 | orchestrator | 2026-03-19 04:41:04.507360 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:41:04.507365 | orchestrator | Thursday 19 March 2026 04:41:04 +0000 (0:00:00.165) 0:04:57.562 ******** 2026-03-19 04:41:04.507375 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:41:01.777025', 'end': '2026-03-19 04:41:01.822342', 'delta': '0:00:00.045317', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:41:04.507438 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '7d1c29d08d66', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:41:02.367690', 'end': '2026-03-19 04:41:02.404082', 'delta': '0:00:00.036392', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d1c29d08d66'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:41:08.119573 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '115813b5cae5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:41:02.893610', 'end': '2026-03-19 04:41:02.952925', 'delta': '0:00:00.059315', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['115813b5cae5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:41:08.119679 | orchestrator | 2026-03-19 04:41:08.119695 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:41:08.119707 | orchestrator | Thursday 19 March 2026 04:41:04 +0000 (0:00:00.197) 0:04:57.759 ******** 2026-03-19 04:41:08.119719 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:08.119731 | orchestrator | 2026-03-19 04:41:08.119742 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:41:08.119754 | orchestrator | Thursday 19 March 2026 04:41:04 +0000 (0:00:00.251) 0:04:58.010 ******** 2026-03-19 04:41:08.119765 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.119776 | orchestrator | 2026-03-19 04:41:08.119787 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:41:08.119798 | orchestrator | Thursday 19 March 2026 04:41:04 +0000 (0:00:00.244) 0:04:58.255 ******** 2026-03-19 04:41:08.119809 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:08.119820 | orchestrator | 2026-03-19 04:41:08.119831 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:41:08.119842 | orchestrator | Thursday 19 March 2026 04:41:05 +0000 (0:00:00.148) 0:04:58.403 ******** 2026-03-19 04:41:08.119853 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:41:08.119863 | orchestrator | 2026-03-19 04:41:08.119874 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:41:08.119902 | orchestrator | Thursday 19 March 2026 04:41:06 +0000 (0:00:01.043) 0:04:59.447 ******** 2026-03-19 04:41:08.119913 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:08.119924 | orchestrator | 2026-03-19 04:41:08.119935 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:41:08.119968 | orchestrator | Thursday 19 March 2026 04:41:06 +0000 (0:00:00.139) 0:04:59.586 ******** 2026-03-19 04:41:08.119980 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.119991 | orchestrator | 2026-03-19 04:41:08.120002 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:41:08.120013 | orchestrator | Thursday 19 March 2026 04:41:06 +0000 (0:00:00.127) 0:04:59.714 ******** 2026-03-19 04:41:08.120024 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120034 | orchestrator | 2026-03-19 04:41:08.120045 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:41:08.120056 | orchestrator | Thursday 19 March 2026 04:41:06 +0000 (0:00:00.220) 0:04:59.934 ******** 2026-03-19 04:41:08.120067 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120078 | orchestrator | 2026-03-19 04:41:08.120091 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:41:08.120105 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.375) 0:05:00.310 ******** 
2026-03-19 04:41:08.120117 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120130 | orchestrator | 2026-03-19 04:41:08.120143 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:41:08.120156 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.141) 0:05:00.452 ******** 2026-03-19 04:41:08.120220 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120238 | orchestrator | 2026-03-19 04:41:08.120259 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:41:08.120279 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.129) 0:05:00.581 ******** 2026-03-19 04:41:08.120298 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120316 | orchestrator | 2026-03-19 04:41:08.120329 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:41:08.120342 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.136) 0:05:00.718 ******** 2026-03-19 04:41:08.120355 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120368 | orchestrator | 2026-03-19 04:41:08.120380 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:41:08.120393 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.124) 0:05:00.842 ******** 2026-03-19 04:41:08.120406 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120419 | orchestrator | 2026-03-19 04:41:08.120431 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:41:08.120445 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.138) 0:05:00.980 ******** 2026-03-19 04:41:08.120457 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.120469 | orchestrator | 2026-03-19 04:41:08.120480 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-19 04:41:08.120491 | orchestrator | Thursday 19 March 2026 04:41:07 +0000 (0:00:00.127) 0:05:01.108 ******** 2026-03-19 04:41:08.120520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:41:08.120589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.120637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:41:08.349297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.349403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:41:08.349420 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:08.349435 | orchestrator | 2026-03-19 04:41:08.349464 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:41:08.349476 | orchestrator | Thursday 19 March 2026 04:41:08 +0000 (0:00:00.265) 0:05:01.374 ******** 2026-03-19 04:41:08.349490 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349505 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349517 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349529 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349583 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349596 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349608 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349665 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:08.349714 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:21.900058 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:41:21.900227 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900251 | orchestrator | 2026-03-19 04:41:21.900273 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:41:21.900301 | 
orchestrator | Thursday 19 March 2026 04:41:08 +0000 (0:00:00.224) 0:05:01.599 ******** 2026-03-19 04:41:21.900315 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:21.900330 | orchestrator | 2026-03-19 04:41:21.900345 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:41:21.900361 | orchestrator | Thursday 19 March 2026 04:41:08 +0000 (0:00:00.502) 0:05:02.101 ******** 2026-03-19 04:41:21.900377 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:21.900391 | orchestrator | 2026-03-19 04:41:21.900405 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:41:21.900419 | orchestrator | Thursday 19 March 2026 04:41:08 +0000 (0:00:00.140) 0:05:02.241 ******** 2026-03-19 04:41:21.900434 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:41:21.900450 | orchestrator | 2026-03-19 04:41:21.900465 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:41:21.900481 | orchestrator | Thursday 19 March 2026 04:41:09 +0000 (0:00:00.499) 0:05:02.741 ******** 2026-03-19 04:41:21.900498 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900513 | orchestrator | 2026-03-19 04:41:21.900523 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:41:21.900533 | orchestrator | Thursday 19 March 2026 04:41:09 +0000 (0:00:00.130) 0:05:02.872 ******** 2026-03-19 04:41:21.900543 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900553 | orchestrator | 2026-03-19 04:41:21.900563 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:41:21.900572 | orchestrator | Thursday 19 March 2026 04:41:10 +0000 (0:00:00.830) 0:05:03.702 ******** 2026-03-19 04:41:21.900584 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900595 | orchestrator | 2026-03-19 04:41:21.900606 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:41:21.900618 | orchestrator | Thursday 19 March 2026 04:41:10 +0000 (0:00:00.156) 0:05:03.858 ******** 2026-03-19 04:41:21.900629 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-19 04:41:21.900640 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:21.900651 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-19 04:41:21.900683 | orchestrator | 2026-03-19 04:41:21.900694 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:41:21.900706 | orchestrator | Thursday 19 March 2026 04:41:11 +0000 (0:00:00.649) 0:05:04.508 ******** 2026-03-19 04:41:21.900717 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:41:21.900729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:41:21.900741 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:41:21.900752 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900763 | orchestrator | 2026-03-19 04:41:21.900775 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:41:21.900786 | orchestrator | Thursday 19 March 2026 04:41:11 +0000 (0:00:00.163) 0:05:04.672 ******** 2026-03-19 04:41:21.900797 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.900808 | orchestrator | 2026-03-19 04:41:21.900818 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:41:21.900830 | orchestrator | Thursday 19 March 2026 04:41:11 +0000 (0:00:00.125) 0:05:04.797 ******** 2026-03-19 04:41:21.900841 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:41:21.900853 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 
04:41:21.900864 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:41:21.900875 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:41:21.900886 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:41:21.900897 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:41:21.900908 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:41:21.900919 | orchestrator | 2026-03-19 04:41:21.900931 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:41:21.900942 | orchestrator | Thursday 19 March 2026 04:41:12 +0000 (0:00:00.790) 0:05:05.588 ******** 2026-03-19 04:41:21.900952 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:41:21.900961 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:21.900971 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:41:21.900981 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:41:21.901008 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:41:21.901019 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:41:21.901029 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:41:21.901038 | orchestrator | 2026-03-19 04:41:21.901048 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-19 04:41:21.901058 | orchestrator | Thursday 19 March 2026 04:41:13 +0000 (0:00:01.533) 0:05:07.121 
******** 2026-03-19 04:41:21.901067 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901077 | orchestrator | 2026-03-19 04:41:21.901086 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-19 04:41:21.901096 | orchestrator | Thursday 19 March 2026 04:41:14 +0000 (0:00:00.224) 0:05:07.345 ******** 2026-03-19 04:41:21.901105 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901115 | orchestrator | 2026-03-19 04:41:21.901133 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-19 04:41:21.901166 | orchestrator | Thursday 19 March 2026 04:41:14 +0000 (0:00:00.228) 0:05:07.574 ******** 2026-03-19 04:41:21.901177 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901187 | orchestrator | 2026-03-19 04:41:21.901196 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-19 04:41:21.901214 | orchestrator | Thursday 19 March 2026 04:41:14 +0000 (0:00:00.127) 0:05:07.701 ******** 2026-03-19 04:41:21.901224 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901233 | orchestrator | 2026-03-19 04:41:21.901243 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-19 04:41:21.901253 | orchestrator | Thursday 19 March 2026 04:41:14 +0000 (0:00:00.223) 0:05:07.925 ******** 2026-03-19 04:41:21.901262 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901272 | orchestrator | 2026-03-19 04:41:21.901282 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-19 04:41:21.901291 | orchestrator | Thursday 19 March 2026 04:41:14 +0000 (0:00:00.117) 0:05:08.042 ******** 2026-03-19 04:41:21.901301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:41:21.901311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 
04:41:21.901320 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:41:21.901330 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901340 | orchestrator | 2026-03-19 04:41:21.901349 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-19 04:41:21.901359 | orchestrator | Thursday 19 March 2026 04:41:15 +0000 (0:00:00.958) 0:05:09.001 ******** 2026-03-19 04:41:21.901369 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-19 04:41:21.901379 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-19 04:41:21.901389 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-19 04:41:21.901398 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-19 04:41:21.901408 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-19 04:41:21.901418 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-19 04:41:21.901427 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:41:21.901437 | orchestrator | 2026-03-19 04:41:21.901446 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-19 04:41:21.901456 | orchestrator | Thursday 19 March 2026 04:41:16 +0000 (0:00:00.616) 0:05:09.617 ******** 2026-03-19 04:41:21.901466 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:21.901475 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:41:21.901485 | orchestrator | 2026-03-19 04:41:21.901494 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-19 04:41:21.901504 | orchestrator | Thursday 19 March 2026 04:41:19 +0000 (0:00:02.751) 0:05:12.369 ******** 
2026-03-19 04:41:21.901514 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:41:21.901523 | orchestrator |
2026-03-19 04:41:21.901533 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 04:41:21.901542 | orchestrator | Thursday 19 March 2026 04:41:20 +0000 (0:00:01.539) 0:05:13.909 ********
2026-03-19 04:41:21.901552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-19 04:41:21.901563 | orchestrator |
2026-03-19 04:41:21.901573 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 04:41:21.901582 | orchestrator | Thursday 19 March 2026 04:41:20 +0000 (0:00:00.212) 0:05:14.121 ********
2026-03-19 04:41:21.901592 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-19 04:41:21.901601 | orchestrator |
2026-03-19 04:41:21.901622 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 04:41:21.901632 | orchestrator | Thursday 19 March 2026 04:41:21 +0000 (0:00:00.555) 0:05:14.323 ********
2026-03-19 04:41:21.901642 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:21.901652 | orchestrator |
2026-03-19 04:41:21.901662 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 04:41:21.901677 | orchestrator | Thursday 19 March 2026 04:41:21 +0000 (0:00:00.145) 0:05:14.879 ********
2026-03-19 04:41:21.901687 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:21.901697 | orchestrator |
2026-03-19 04:41:21.901706 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 04:41:21.901716 | orchestrator | Thursday 19 March 2026 04:41:21 +0000 (0:00:00.121) 0:05:15.025 ********
2026-03-19 04:41:21.901726 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:21.901735 | orchestrator |
2026-03-19 04:41:21.901749 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 04:41:21.901775 | orchestrator | Thursday 19 March 2026 04:41:21 +0000 (0:00:00.121) 0:05:15.146 ********
2026-03-19 04:41:33.628990 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629174 | orchestrator |
2026-03-19 04:41:33.629206 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 04:41:33.629228 | orchestrator | Thursday 19 March 2026 04:41:22 +0000 (0:00:00.163) 0:05:15.310 ********
2026-03-19 04:41:33.629248 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.629268 | orchestrator |
2026-03-19 04:41:33.629286 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 04:41:33.629307 | orchestrator | Thursday 19 March 2026 04:41:22 +0000 (0:00:00.533) 0:05:15.844 ********
2026-03-19 04:41:33.629326 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629345 | orchestrator |
2026-03-19 04:41:33.629365 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 04:41:33.629377 | orchestrator | Thursday 19 March 2026 04:41:22 +0000 (0:00:00.355) 0:05:16.199 ********
2026-03-19 04:41:33.629406 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629418 | orchestrator |
2026-03-19 04:41:33.629429 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 04:41:33.629440 | orchestrator | Thursday 19 March 2026 04:41:23 +0000 (0:00:00.178) 0:05:16.377 ********
2026-03-19 04:41:33.629451 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.629462 | orchestrator |
2026-03-19 04:41:33.629473 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 04:41:33.629484 | orchestrator | Thursday 19 March 2026 04:41:23 +0000 (0:00:00.533) 0:05:16.911 ********
2026-03-19 04:41:33.629495 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.629505 | orchestrator |
2026-03-19 04:41:33.629516 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 04:41:33.629527 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.559) 0:05:17.471 ********
2026-03-19 04:41:33.629538 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629549 | orchestrator |
2026-03-19 04:41:33.629560 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 04:41:33.629571 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.145) 0:05:17.616 ********
2026-03-19 04:41:33.629581 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.629592 | orchestrator |
2026-03-19 04:41:33.629603 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 04:41:33.629614 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.153) 0:05:17.770 ********
2026-03-19 04:41:33.629625 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629635 | orchestrator |
2026-03-19 04:41:33.629646 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 04:41:33.629657 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.137) 0:05:17.907 ********
2026-03-19 04:41:33.629668 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629679 | orchestrator |
2026-03-19 04:41:33.629690 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 04:41:33.629700 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.138) 0:05:18.045 ********
2026-03-19 04:41:33.629711 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629722 | orchestrator |
2026-03-19 04:41:33.629733 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 04:41:33.629775 | orchestrator | Thursday 19 March 2026 04:41:24 +0000 (0:00:00.125) 0:05:18.171 ********
2026-03-19 04:41:33.629797 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629816 | orchestrator |
2026-03-19 04:41:33.629834 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 04:41:33.629852 | orchestrator | Thursday 19 March 2026 04:41:25 +0000 (0:00:00.121) 0:05:18.292 ********
2026-03-19 04:41:33.629871 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.629890 | orchestrator |
2026-03-19 04:41:33.629911 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 04:41:33.629930 | orchestrator | Thursday 19 March 2026 04:41:25 +0000 (0:00:00.125) 0:05:18.418 ********
2026-03-19 04:41:33.629949 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.629972 | orchestrator |
2026-03-19 04:41:33.629990 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 04:41:33.630010 | orchestrator | Thursday 19 March 2026 04:41:25 +0000 (0:00:00.159) 0:05:18.578 ********
2026-03-19 04:41:33.630114 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.630168 | orchestrator |
2026-03-19 04:41:33.630187 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 04:41:33.630208 | orchestrator | Thursday 19 March 2026 04:41:25 +0000 (0:00:00.152) 0:05:18.731 ********
2026-03-19 04:41:33.630220 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.630230 | orchestrator |
2026-03-19 04:41:33.630241 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 04:41:33.630289 | orchestrator | Thursday 19 March 2026 04:41:25 +0000 (0:00:00.452) 0:05:19.184 ********
2026-03-19 04:41:33.630302 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630313 | orchestrator |
2026-03-19 04:41:33.630324 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 04:41:33.630337 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.137) 0:05:19.321 ********
2026-03-19 04:41:33.630355 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630370 | orchestrator |
2026-03-19 04:41:33.630385 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:41:33.630407 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.122) 0:05:19.443 ********
2026-03-19 04:41:33.630432 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630451 | orchestrator |
2026-03-19 04:41:33.630469 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:41:33.630485 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.127) 0:05:19.570 ********
2026-03-19 04:41:33.630503 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630520 | orchestrator |
2026-03-19 04:41:33.630536 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:41:33.630554 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.122) 0:05:19.693 ********
2026-03-19 04:41:33.630572 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630591 | orchestrator |
2026-03-19 04:41:33.630639 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:41:33.630655 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.128) 0:05:19.822 ********
2026-03-19 04:41:33.630666 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630676 | orchestrator |
2026-03-19 04:41:33.630687 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:41:33.630698 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.114) 0:05:19.937 ********
2026-03-19 04:41:33.630708 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630719 | orchestrator |
2026-03-19 04:41:33.630730 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:41:33.630742 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.123) 0:05:20.061 ********
2026-03-19 04:41:33.630752 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630763 | orchestrator |
2026-03-19 04:41:33.630784 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:41:33.630809 | orchestrator | Thursday 19 March 2026 04:41:26 +0000 (0:00:00.120) 0:05:20.182 ********
2026-03-19 04:41:33.630820 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630831 | orchestrator |
2026-03-19 04:41:33.630842 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:41:33.630853 | orchestrator | Thursday 19 March 2026 04:41:27 +0000 (0:00:00.146) 0:05:20.328 ********
2026-03-19 04:41:33.630864 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630874 | orchestrator |
2026-03-19 04:41:33.630885 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:41:33.630896 | orchestrator | Thursday 19 March 2026 04:41:27 +0000 (0:00:00.110) 0:05:20.439 ********
2026-03-19 04:41:33.630906 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630917 | orchestrator |
2026-03-19 04:41:33.630928 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:41:33.630938 | orchestrator | Thursday 19 March 2026 04:41:27 +0000 (0:00:00.123) 0:05:20.562 ********
2026-03-19 04:41:33.630949 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.630960 | orchestrator |
2026-03-19 04:41:33.630970 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:41:33.630981 | orchestrator | Thursday 19 March 2026 04:41:27 +0000 (0:00:00.420) 0:05:20.982 ********
2026-03-19 04:41:33.630992 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.631003 | orchestrator |
2026-03-19 04:41:33.631013 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:41:33.631024 | orchestrator | Thursday 19 March 2026 04:41:28 +0000 (0:00:00.961) 0:05:21.944 ********
2026-03-19 04:41:33.631035 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.631045 | orchestrator |
2026-03-19 04:41:33.631060 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:41:33.631078 | orchestrator | Thursday 19 March 2026 04:41:30 +0000 (0:00:01.512) 0:05:23.456 ********
2026-03-19 04:41:33.631107 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-19 04:41:33.631169 | orchestrator |
2026-03-19 04:41:33.631189 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:41:33.631208 | orchestrator | Thursday 19 March 2026 04:41:30 +0000 (0:00:00.212) 0:05:23.668 ********
2026-03-19 04:41:33.631228 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.631247 | orchestrator |
2026-03-19 04:41:33.631265 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:41:33.631284 | orchestrator | Thursday 19 March 2026 04:41:30 +0000 (0:00:00.142) 0:05:23.811 ********
2026-03-19 04:41:33.631295 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.631306 | orchestrator |
2026-03-19 04:41:33.631316 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:41:33.631327 | orchestrator | Thursday 19 March 2026 04:41:30 +0000 (0:00:00.132) 0:05:23.943 ********
2026-03-19 04:41:33.631338 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:41:33.631348 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:41:33.631361 | orchestrator |
2026-03-19 04:41:33.631386 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:41:33.631409 | orchestrator | Thursday 19 March 2026 04:41:31 +0000 (0:00:00.864) 0:05:24.808 ********
2026-03-19 04:41:33.631429 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.631448 | orchestrator |
2026-03-19 04:41:33.631465 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:41:33.631481 | orchestrator | Thursday 19 March 2026 04:41:31 +0000 (0:00:00.447) 0:05:25.255 ********
2026-03-19 04:41:33.631492 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.631502 | orchestrator |
2026-03-19 04:41:33.631513 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:41:33.631524 | orchestrator | Thursday 19 March 2026 04:41:32 +0000 (0:00:00.149) 0:05:25.405 ********
2026-03-19 04:41:33.631546 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.631557 | orchestrator |
2026-03-19 04:41:33.631567 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:41:33.631578 | orchestrator | Thursday 19 March 2026 04:41:32 +0000 (0:00:00.125) 0:05:25.531 ********
2026-03-19 04:41:33.631589 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:33.631607 | orchestrator |
2026-03-19 04:41:33.631625 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:41:33.631643 | orchestrator | Thursday 19 March 2026 04:41:32 +0000 (0:00:00.127) 0:05:25.658 ********
2026-03-19 04:41:33.631663 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-19 04:41:33.631681 | orchestrator |
2026-03-19 04:41:33.631701 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:41:33.631719 | orchestrator | Thursday 19 March 2026 04:41:32 +0000 (0:00:00.429) 0:05:26.088 ********
2026-03-19 04:41:33.631738 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:33.631757 | orchestrator |
2026-03-19 04:41:33.631776 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:41:33.631807 | orchestrator | Thursday 19 March 2026 04:41:33 +0000 (0:00:00.791) 0:05:26.879 ********
2026-03-19 04:41:46.524168 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:41:46.524269 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:41:46.524281 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:41:46.524291 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524301 | orchestrator |
2026-03-19 04:41:46.524310 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:41:46.524318 | orchestrator | Thursday 19 March 2026 04:41:33 +0000 (0:00:00.163) 0:05:27.043 ********
2026-03-19 04:41:46.524326 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524334 | orchestrator |
2026-03-19 04:41:46.524355 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:41:46.524364 | orchestrator | Thursday 19 March 2026 04:41:33 +0000 (0:00:00.196) 0:05:27.176 ********
2026-03-19 04:41:46.524372 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524380 | orchestrator |
2026-03-19 04:41:46.524387 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:41:46.524395 | orchestrator | Thursday 19 March 2026 04:41:34 +0000 (0:00:00.138) 0:05:27.373 ********
2026-03-19 04:41:46.524403 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524411 | orchestrator |
2026-03-19 04:41:46.524419 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:41:46.524427 | orchestrator | Thursday 19 March 2026 04:41:34 +0000 (0:00:00.146) 0:05:27.512 ********
2026-03-19 04:41:46.524435 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524443 | orchestrator |
2026-03-19 04:41:46.524451 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:41:46.524460 | orchestrator | Thursday 19 March 2026 04:41:34 +0000 (0:00:00.145) 0:05:27.658 ********
2026-03-19 04:41:46.524467 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524475 | orchestrator |
2026-03-19 04:41:46.524483 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:41:46.524491 | orchestrator | Thursday 19 March 2026 04:41:34 +0000 (0:00:00.145) 0:05:27.803 ********
2026-03-19 04:41:46.524499 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:46.524508 | orchestrator |
2026-03-19 04:41:46.524516 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:41:46.524524 | orchestrator | Thursday 19 March 2026 04:41:36 +0000 (0:00:01.702) 0:05:29.506 ********
2026-03-19 04:41:46.524532 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:46.524540 | orchestrator |
2026-03-19 04:41:46.524547 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:41:46.524573 | orchestrator | Thursday 19 March 2026 04:41:36 +0000 (0:00:00.135) 0:05:29.642 ********
2026-03-19 04:41:46.524581 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-19 04:41:46.524589 | orchestrator |
2026-03-19 04:41:46.524597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:41:46.524605 | orchestrator | Thursday 19 March 2026 04:41:36 +0000 (0:00:00.202) 0:05:29.844 ********
2026-03-19 04:41:46.524613 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524621 | orchestrator |
2026-03-19 04:41:46.524629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:41:46.524637 | orchestrator | Thursday 19 March 2026 04:41:36 +0000 (0:00:00.156) 0:05:30.001 ********
2026-03-19 04:41:46.524644 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524652 | orchestrator |
2026-03-19 04:41:46.524660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:41:46.524668 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.403) 0:05:30.405 ********
2026-03-19 04:41:46.524676 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524684 | orchestrator |
2026-03-19 04:41:46.524693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:41:46.524702 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.161) 0:05:30.566 ********
2026-03-19 04:41:46.524712 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524721 | orchestrator |
2026-03-19 04:41:46.524730 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:41:46.524739 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.133) 0:05:30.700 ********
2026-03-19 04:41:46.524748 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524757 | orchestrator |
2026-03-19 04:41:46.524766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:41:46.524775 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.152) 0:05:30.852 ********
2026-03-19 04:41:46.524785 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524794 | orchestrator |
2026-03-19 04:41:46.524803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:41:46.524813 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.146) 0:05:30.999 ********
2026-03-19 04:41:46.524821 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524830 | orchestrator |
2026-03-19 04:41:46.524839 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:41:46.524848 | orchestrator | Thursday 19 March 2026 04:41:37 +0000 (0:00:00.148) 0:05:31.147 ********
2026-03-19 04:41:46.524857 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.524866 | orchestrator |
2026-03-19 04:41:46.524874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:41:46.524883 | orchestrator | Thursday 19 March 2026 04:41:38 +0000 (0:00:00.172) 0:05:31.319 ********
2026-03-19 04:41:46.524892 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:41:46.524902 | orchestrator |
2026-03-19 04:41:46.524911 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:41:46.524920 | orchestrator | Thursday 19 March 2026 04:41:38 +0000 (0:00:00.234) 0:05:31.554 ********
2026-03-19 04:41:46.524928 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-19 04:41:46.524938 | orchestrator |
2026-03-19 04:41:46.524947 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:41:46.524968 | orchestrator | Thursday 19 March 2026 04:41:38 +0000 (0:00:00.201) 0:05:31.755 ********
2026-03-19 04:41:46.524978 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-19 04:41:46.524987 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-19 04:41:46.524996 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-19 04:41:46.525005 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-19 04:41:46.525020 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-19 04:41:46.525029 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-19 04:41:46.525039 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-19 04:41:46.525051 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:41:46.525059 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:41:46.525067 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:41:46.525075 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:41:46.525083 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:41:46.525091 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:41:46.525099 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:41:46.525106 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-19 04:41:46.525138 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-19 04:41:46.525146 | orchestrator |
2026-03-19 04:41:46.525155 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:41:46.525162 | orchestrator | Thursday 19 March 2026 04:41:44 +0000 (0:00:05.850) 0:05:37.606 ********
2026-03-19 04:41:46.525170 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525178 | orchestrator |
2026-03-19 04:41:46.525186 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:41:46.525194 | orchestrator | Thursday 19 March 2026 04:41:44 +0000 (0:00:00.135) 0:05:37.742 ********
2026-03-19 04:41:46.525202 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525210 | orchestrator |
2026-03-19 04:41:46.525218 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:41:46.525226 | orchestrator | Thursday 19 March 2026 04:41:44 +0000 (0:00:00.361) 0:05:38.104 ********
2026-03-19 04:41:46.525234 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525242 | orchestrator |
2026-03-19 04:41:46.525249 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:41:46.525257 | orchestrator | Thursday 19 March 2026 04:41:44 +0000 (0:00:00.129) 0:05:38.233 ********
2026-03-19 04:41:46.525265 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525273 | orchestrator |
2026-03-19 04:41:46.525281 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:41:46.525289 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.129) 0:05:38.363 ********
2026-03-19 04:41:46.525297 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525305 | orchestrator |
2026-03-19 04:41:46.525313 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:41:46.525320 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.137) 0:05:38.501 ********
2026-03-19 04:41:46.525328 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525347 | orchestrator |
2026-03-19 04:41:46.525355 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:41:46.525372 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.139) 0:05:38.640 ********
2026-03-19 04:41:46.525380 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525388 | orchestrator |
2026-03-19 04:41:46.525396 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:41:46.525404 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.127) 0:05:38.768 ********
2026-03-19 04:41:46.525412 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525420 | orchestrator |
2026-03-19 04:41:46.525428 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:41:46.525436 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.125) 0:05:38.894 ********
2026-03-19 04:41:46.525444 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525457 | orchestrator |
2026-03-19 04:41:46.525465 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:41:46.525473 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.129) 0:05:39.023 ********
2026-03-19 04:41:46.525481 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525489 | orchestrator |
2026-03-19 04:41:46.525497 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:41:46.525505 | orchestrator | Thursday 19 March 2026 04:41:45 +0000 (0:00:00.122) 0:05:39.145 ********
2026-03-19 04:41:46.525513 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525521 | orchestrator |
2026-03-19 04:41:46.525529 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:41:46.525537 | orchestrator | Thursday 19 March 2026 04:41:46 +0000 (0:00:00.134) 0:05:39.280 ********
2026-03-19 04:41:46.525545 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525552 | orchestrator |
2026-03-19 04:41:46.525560 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:41:46.525568 | orchestrator | Thursday 19 March 2026 04:41:46 +0000 (0:00:00.136) 0:05:39.416 ********
2026-03-19 04:41:46.525576 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525584 | orchestrator |
2026-03-19 04:41:46.525592 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:41:46.525600 | orchestrator | Thursday 19 March 2026 04:41:46 +0000 (0:00:00.222) 0:05:39.638 ********
2026-03-19 04:41:46.525608 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:41:46.525616 | orchestrator |
2026-03-19 04:41:46.525624 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:41:46.525637 | orchestrator | Thursday 19 March 2026 04:41:46 +0000 (0:00:00.136) 0:05:39.775 ********
2026-03-19 04:42:04.994769 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.994876 | orchestrator |
2026-03-19 04:42:04.994891 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:42:04.994903 | orchestrator | Thursday 19 March 2026 04:41:46 +0000 (0:00:00.215) 0:05:39.990 ********
2026-03-19 04:42:04.994914 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.994924 | orchestrator |
2026-03-19 04:42:04.994934 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:42:04.994944 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.410) 0:05:40.401 ********
2026-03-19 04:42:04.994954 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.994964 | orchestrator |
2026-03-19 04:42:04.994989 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:42:04.995001 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.126) 0:05:40.527 ********
2026-03-19 04:42:04.995011 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995021 | orchestrator |
2026-03-19 04:42:04.995031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:42:04.995040 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.142) 0:05:40.670 ********
2026-03-19 04:42:04.995050 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995060 | orchestrator |
2026-03-19 04:42:04.995070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:42:04.995132 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.148) 0:05:40.819 ********
2026-03-19 04:42:04.995152 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995169 | orchestrator |
2026-03-19 04:42:04.995182 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:42:04.995192 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.146) 0:05:40.965 ********
2026-03-19 04:42:04.995201 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995211 | orchestrator |
2026-03-19 04:42:04.995221 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:42:04.995231 | orchestrator | Thursday 19 March 2026 04:41:47 +0000 (0:00:00.126) 0:05:41.092 ********
2026-03-19 04:42:04.995266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-19 04:42:04.995277 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-19 04:42:04.995287 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-19 04:42:04.995296 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995306 | orchestrator |
2026-03-19 04:42:04.995316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:42:04.995326 | orchestrator | Thursday 19 March 2026 04:41:48 +0000 (0:00:00.408) 0:05:41.500 ********
2026-03-19 04:42:04.995336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-19 04:42:04.995346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-19 04:42:04.995356 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-19 04:42:04.995365 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995375 | orchestrator |
2026-03-19 04:42:04.995384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:42:04.995394 | orchestrator | Thursday 19 March 2026 04:41:48 +0000 (0:00:00.395) 0:05:41.896 ********
2026-03-19 04:42:04.995404 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-19 04:42:04.995413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-19 04:42:04.995423 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-19 04:42:04.995432 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995447 | orchestrator |
2026-03-19 04:42:04.995463 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:42:04.995478 | orchestrator | Thursday 19 March 2026 04:41:49 +0000 (0:00:00.387) 0:05:42.283 ********
2026-03-19 04:42:04.995494 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995510 | orchestrator |
2026-03-19 04:42:04.995525 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 04:42:04.995540 | orchestrator | Thursday 19 March 2026 04:41:49 +0000 (0:00:00.141) 0:05:42.424 ********
2026-03-19 04:42:04.995557 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-19 04:42:04.995574 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995591 | orchestrator |
2026-03-19 04:42:04.995607 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 04:42:04.995624 | orchestrator | Thursday 19 March 2026 04:41:49 +0000 (0:00:00.304) 0:05:42.729 ********
2026-03-19 04:42:04.995634 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.995644 | orchestrator |
2026-03-19 04:42:04.995654 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-19 04:42:04.995664 | orchestrator | Thursday 19 March 2026 04:41:50 +0000 (0:00:01.123) 0:05:43.852 ********
2026-03-19 04:42:04.995673 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.995683 | orchestrator |
2026-03-19 04:42:04.995693 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-19 04:42:04.995703 | orchestrator | Thursday 19 March 2026 04:41:50 +0000 (0:00:00.167) 0:05:44.020 ********
2026-03-19 04:42:04.995713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-03-19 04:42:04.995723 | orchestrator |
2026-03-19 04:42:04.995733 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-19 04:42:04.995742 | orchestrator | Thursday 19 March 2026 04:41:51 +0000 (0:00:00.250) 0:05:44.270 ********
2026-03-19 04:42:04.995752 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-03-19 04:42:04.995761 | orchestrator |
2026-03-19 04:42:04.995771 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-19 04:42:04.995781 | orchestrator | Thursday 19 March 2026 04:41:53 +0000 (0:00:02.174) 0:05:46.445 ********
2026-03-19 04:42:04.995791 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:42:04.995800 | orchestrator |
2026-03-19 04:42:04.995810 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-19 04:42:04.995836 | orchestrator | Thursday 19 March 2026 04:41:53 +0000 (0:00:00.167) 0:05:46.613 ********
2026-03-19 04:42:04.995856 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.995866 | orchestrator |
2026-03-19 04:42:04.995876 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-19 04:42:04.995885 | orchestrator | Thursday 19 March 2026 04:41:53 +0000 (0:00:00.162) 0:05:46.775 ********
2026-03-19 04:42:04.995895 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.995905 | orchestrator |
2026-03-19 04:42:04.995914 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-19 04:42:04.995924 | orchestrator | Thursday 19 March 2026 04:41:53 +0000 (0:00:00.161) 0:05:46.937 ********
2026-03-19 04:42:04.995941 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:42:04.995951 | orchestrator |
2026-03-19 04:42:04.995961 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-19 04:42:04.995970 | orchestrator | Thursday 19 March 2026 04:41:54 +0000 (0:00:01.078) 0:05:48.015 ********
2026-03-19 04:42:04.995980 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.995990 | orchestrator |
2026-03-19 04:42:04.995999 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-19 04:42:04.996009 | orchestrator | Thursday 19 March 2026 04:41:55 +0000 (0:00:00.541) 0:05:48.624 ********
2026-03-19 04:42:04.996019 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.996028 | orchestrator |
2026-03-19 04:42:04.996038 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-19 04:42:04.996048 | orchestrator | Thursday 19 March 2026 04:41:55 +0000 (0:00:00.487) 0:05:49.165 ********
2026-03-19 04:42:04.996057 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:42:04.996067 | orchestrator |
2026-03-19 04:42:04.996076 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-19 04:42:04.996114 | orchestrator | Thursday 19 March 2026 04:41:56 +0000 (0:00:00.487) 0:05:49.653 ********
2026-03-19 04:42:04.996127 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:42:04.996137 | orchestrator |
2026-03-19 04:42:04.996146 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-19 04:42:04.996156 | orchestrator | Thursday 19 March 2026 04:41:57 +0000 (0:00:00.668) 0:05:50.321 ********
2026-03-19 04:42:04.996166 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:42:04.996176 | orchestrator |
2026-03-19 04:42:04.996185 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-19 04:42:04.996195 | orchestrator | Thursday 19 March 2026 04:41:58 +0000 (0:00:01.183) 0:05:51.505 ********
2026-03-19 04:42:04.996204 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:42:04.996214 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-19 04:42:04.996224 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-19 04:42:04.996234 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-19 04:42:04.996244 | orchestrator |
2026-03-19 04:42:04.996253 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-19 04:42:04.996263 | orchestrator | Thursday 19 March 2026 04:42:01 +0000 (0:00:02.997) 0:05:54.503 ********
2026-03-19 04:42:04.996273 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:42:04.996282 | orchestrator |
2026-03-19 04:42:04.996292 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon
container command] ************************** 2026-03-19 04:42:04.996302 | orchestrator | Thursday 19 March 2026 04:42:02 +0000 (0:00:01.076) 0:05:55.579 ******** 2026-03-19 04:42:04.996311 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:04.996321 | orchestrator | 2026-03-19 04:42:04.996331 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-19 04:42:04.996340 | orchestrator | Thursday 19 March 2026 04:42:02 +0000 (0:00:00.136) 0:05:55.715 ******** 2026-03-19 04:42:04.996353 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:04.996369 | orchestrator | 2026-03-19 04:42:04.996385 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-19 04:42:04.996411 | orchestrator | Thursday 19 March 2026 04:42:02 +0000 (0:00:00.146) 0:05:55.862 ******** 2026-03-19 04:42:04.996427 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:04.996442 | orchestrator | 2026-03-19 04:42:04.996458 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-19 04:42:04.996474 | orchestrator | Thursday 19 March 2026 04:42:03 +0000 (0:00:00.906) 0:05:56.769 ******** 2026-03-19 04:42:04.996489 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:04.996504 | orchestrator | 2026-03-19 04:42:04.996519 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-19 04:42:04.996534 | orchestrator | Thursday 19 March 2026 04:42:03 +0000 (0:00:00.446) 0:05:57.215 ******** 2026-03-19 04:42:04.996550 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:04.996566 | orchestrator | 2026-03-19 04:42:04.996581 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-19 04:42:04.996598 | orchestrator | Thursday 19 March 2026 04:42:04 +0000 (0:00:00.130) 0:05:57.345 ******** 2026-03-19 04:42:04.996615 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-19 04:42:04.996631 | orchestrator | 2026-03-19 04:42:04.996648 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-19 04:42:04.996660 | orchestrator | Thursday 19 March 2026 04:42:04 +0000 (0:00:00.198) 0:05:57.544 ******** 2026-03-19 04:42:04.996669 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:04.996679 | orchestrator | 2026-03-19 04:42:04.996688 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-19 04:42:04.996698 | orchestrator | Thursday 19 March 2026 04:42:04 +0000 (0:00:00.127) 0:05:57.672 ******** 2026-03-19 04:42:04.996708 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:04.996717 | orchestrator | 2026-03-19 04:42:04.996727 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-19 04:42:04.996736 | orchestrator | Thursday 19 March 2026 04:42:04 +0000 (0:00:00.123) 0:05:57.796 ******** 2026-03-19 04:42:04.996746 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-19 04:42:04.996756 | orchestrator | 2026-03-19 04:42:04.996775 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-19 04:42:52.820278 | orchestrator | Thursday 19 March 2026 04:42:04 +0000 (0:00:00.447) 0:05:58.243 ******** 2026-03-19 04:42:52.820393 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.820408 | orchestrator | 2026-03-19 04:42:52.820420 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-19 04:42:52.820430 | orchestrator | Thursday 19 March 2026 04:42:06 +0000 (0:00:01.378) 0:05:59.622 ******** 2026-03-19 04:42:52.820440 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.820450 | orchestrator | 2026-03-19 04:42:52.820460 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-03-19 04:42:52.820484 | orchestrator | Thursday 19 March 2026 04:42:07 +0000 (0:00:00.937) 0:06:00.559 ******** 2026-03-19 04:42:52.820495 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.820504 | orchestrator | 2026-03-19 04:42:52.820514 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-19 04:42:52.820524 | orchestrator | Thursday 19 March 2026 04:42:08 +0000 (0:00:01.429) 0:06:01.988 ******** 2026-03-19 04:42:52.820533 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:42:52.820544 | orchestrator | 2026-03-19 04:42:52.820553 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-19 04:42:52.820563 | orchestrator | Thursday 19 March 2026 04:42:11 +0000 (0:00:02.318) 0:06:04.307 ******** 2026-03-19 04:42:52.820573 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-19 04:42:52.820583 | orchestrator | 2026-03-19 04:42:52.820593 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-19 04:42:52.820602 | orchestrator | Thursday 19 March 2026 04:42:11 +0000 (0:00:00.223) 0:06:04.530 ******** 2026-03-19 04:42:52.820612 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-19 04:42:52.820644 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.820655 | orchestrator | 2026-03-19 04:42:52.820664 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-19 04:42:52.820674 | orchestrator | Thursday 19 March 2026 04:42:33 +0000 (0:00:21.876) 0:06:26.406 ******** 2026-03-19 04:42:52.820684 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.820693 | orchestrator | 2026-03-19 04:42:52.820703 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-19 04:42:52.820712 | orchestrator | Thursday 19 March 2026 04:42:35 +0000 (0:00:02.066) 0:06:28.473 ******** 2026-03-19 04:42:52.820722 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:52.820732 | orchestrator | 2026-03-19 04:42:52.820741 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-19 04:42:52.820751 | orchestrator | Thursday 19 March 2026 04:42:35 +0000 (0:00:00.133) 0:06:28.607 ******** 2026-03-19 04:42:52.820763 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-19 04:42:52.820776 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-19 04:42:52.820786 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-19 04:42:52.820798 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-19 04:42:52.820816 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-19 04:42:52.820854 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}])  2026-03-19 04:42:52.820873 | orchestrator | 2026-03-19 04:42:52.820890 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-19 04:42:52.820906 | orchestrator | Thursday 19 March 2026 04:42:44 +0000 (0:00:09.158) 0:06:37.765 ******** 2026-03-19 04:42:52.820923 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:42:52.820939 | orchestrator | 
2026-03-19 04:42:52.820955 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:42:52.820972 | orchestrator | Thursday 19 March 2026 04:42:46 +0000 (0:00:01.509) 0:06:39.275 ******** 2026-03-19 04:42:52.820997 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:42:52.821053 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-19 04:42:52.821074 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-19 04:42:52.821091 | orchestrator | 2026-03-19 04:42:52.821108 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:42:52.821124 | orchestrator | Thursday 19 March 2026 04:42:47 +0000 (0:00:01.374) 0:06:40.649 ******** 2026-03-19 04:42:52.821140 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:42:52.821157 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:42:52.821174 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:42:52.821189 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:52.821204 | orchestrator | 2026-03-19 04:42:52.821220 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-19 04:42:52.821236 | orchestrator | Thursday 19 March 2026 04:42:47 +0000 (0:00:00.492) 0:06:41.141 ******** 2026-03-19 04:42:52.821253 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:42:52.821269 | orchestrator | 2026-03-19 04:42:52.821285 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-19 04:42:52.821302 | orchestrator | Thursday 19 March 2026 04:42:48 +0000 (0:00:00.142) 0:06:41.284 ******** 2026-03-19 04:42:52.821318 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:42:52.821334 | orchestrator | 2026-03-19 04:42:52.821350 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-19 04:42:52.821364 | orchestrator | 2026-03-19 04:42:52.821374 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-19 04:42:52.821383 | orchestrator | Thursday 19 March 2026 04:42:49 +0000 (0:00:01.756) 0:06:43.041 ******** 2026-03-19 04:42:52.821393 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821402 | orchestrator | 2026-03-19 04:42:52.821412 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-19 04:42:52.821422 | orchestrator | Thursday 19 March 2026 04:42:50 +0000 (0:00:00.512) 0:06:43.553 ******** 2026-03-19 04:42:52.821431 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821441 | orchestrator | 2026-03-19 04:42:52.821450 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-19 04:42:52.821460 | orchestrator | Thursday 19 March 2026 04:42:50 +0000 (0:00:00.135) 0:06:43.688 ******** 2026-03-19 04:42:52.821470 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:42:52.821480 | orchestrator | 2026-03-19 04:42:52.821489 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-19 04:42:52.821499 | orchestrator | Thursday 19 March 2026 04:42:50 +0000 (0:00:00.113) 0:06:43.801 ******** 2026-03-19 04:42:52.821509 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821518 | orchestrator | 2026-03-19 04:42:52.821528 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:42:52.821537 | orchestrator | Thursday 19 March 
2026 04:42:50 +0000 (0:00:00.147) 0:06:43.949 ******** 2026-03-19 04:42:52.821547 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-19 04:42:52.821556 | orchestrator | 2026-03-19 04:42:52.821566 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:42:52.821575 | orchestrator | Thursday 19 March 2026 04:42:50 +0000 (0:00:00.244) 0:06:44.193 ******** 2026-03-19 04:42:52.821585 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821595 | orchestrator | 2026-03-19 04:42:52.821604 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:42:52.821614 | orchestrator | Thursday 19 March 2026 04:42:51 +0000 (0:00:00.459) 0:06:44.653 ******** 2026-03-19 04:42:52.821623 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821633 | orchestrator | 2026-03-19 04:42:52.821642 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:42:52.821652 | orchestrator | Thursday 19 March 2026 04:42:51 +0000 (0:00:00.381) 0:06:45.035 ******** 2026-03-19 04:42:52.821676 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821685 | orchestrator | 2026-03-19 04:42:52.821695 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:42:52.821704 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.456) 0:06:45.491 ******** 2026-03-19 04:42:52.821714 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821724 | orchestrator | 2026-03-19 04:42:52.821733 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:42:52.821743 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.150) 0:06:45.642 ******** 2026-03-19 04:42:52.821752 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821762 | orchestrator | 2026-03-19 04:42:52.821771 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:42:52.821781 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.141) 0:06:45.783 ******** 2026-03-19 04:42:52.821790 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:42:52.821800 | orchestrator | 2026-03-19 04:42:52.821809 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:42:52.821819 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.140) 0:06:45.924 ******** 2026-03-19 04:42:52.821829 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:42:52.821838 | orchestrator | 2026-03-19 04:42:52.821848 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:42:52.821870 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.148) 0:06:46.072 ******** 2026-03-19 04:43:00.800887 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:00.801004 | orchestrator | 2026-03-19 04:43:00.801095 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:43:00.801119 | orchestrator | Thursday 19 March 2026 04:42:52 +0000 (0:00:00.131) 0:06:46.204 ******** 2026-03-19 04:43:00.801140 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:43:00.801160 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:43:00.801196 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:00.801209 | orchestrator | 2026-03-19 04:43:00.801220 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:43:00.801231 | orchestrator | Thursday 19 March 2026 04:42:53 +0000 (0:00:00.645) 0:06:46.849 ******** 2026-03-19 04:43:00.801243 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:00.801254 | 
orchestrator | 2026-03-19 04:43:00.801265 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:43:00.801276 | orchestrator | Thursday 19 March 2026 04:42:53 +0000 (0:00:00.267) 0:06:47.116 ******** 2026-03-19 04:43:00.801288 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:43:00.801299 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:43:00.801310 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:00.801321 | orchestrator | 2026-03-19 04:43:00.801332 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:43:00.801343 | orchestrator | Thursday 19 March 2026 04:42:56 +0000 (0:00:02.187) 0:06:49.304 ******** 2026-03-19 04:43:00.801354 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:43:00.801366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:43:00.801377 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:43:00.801388 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.801400 | orchestrator | 2026-03-19 04:43:00.801414 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:43:00.801426 | orchestrator | Thursday 19 March 2026 04:42:56 +0000 (0:00:00.432) 0:06:49.736 ******** 2026-03-19 04:43:00.801442 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801483 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801512 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.801525 | orchestrator | 2026-03-19 04:43:00.801538 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:43:00.801551 | orchestrator | Thursday 19 March 2026 04:42:57 +0000 (0:00:00.891) 0:06:50.627 ******** 2026-03-19 04:43:00.801564 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801578 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:00.801601 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.801612 | orchestrator | 2026-03-19 04:43:00.801623 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:43:00.801634 | orchestrator | Thursday 19 March 2026 04:42:57 +0000 (0:00:00.157) 0:06:50.785 ******** 2026-03-19 04:43:00.801673 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:42:54.371892', 'end': '2026-03-19 04:42:54.416984', 'delta': '0:00:00.045092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:43:00.801689 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:42:54.922034', 'end': '2026-03-19 04:42:54.973376', 'delta': '0:00:00.051342', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:43:00.801713 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '115813b5cae5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:42:55.849227', 'end': '2026-03-19 04:42:55.901350', 'delta': '0:00:00.052123', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['115813b5cae5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:43:00.801725 | orchestrator | 2026-03-19 04:43:00.801737 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:43:00.801748 | orchestrator | Thursday 19 March 2026 04:42:57 +0000 (0:00:00.188) 0:06:50.973 ******** 2026-03-19 04:43:00.801759 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:00.801771 | orchestrator | 2026-03-19 04:43:00.801781 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:43:00.801792 | orchestrator | Thursday 19 March 2026 04:42:58 +0000 (0:00:00.829) 0:06:51.803 ******** 2026-03-19 04:43:00.801803 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.801814 | orchestrator | 2026-03-19 04:43:00.801825 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:43:00.801836 | orchestrator | Thursday 19 March 2026 04:42:58 +0000 (0:00:00.262) 0:06:52.066 ******** 2026-03-19 04:43:00.801847 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:00.801858 | orchestrator | 2026-03-19 04:43:00.801869 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:43:00.801880 | orchestrator | Thursday 19 March 2026 04:42:58 +0000 (0:00:00.148) 0:06:52.214 ******** 2026-03-19 04:43:00.801890 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-03-19 04:43:00.801901 | orchestrator | 2026-03-19 04:43:00.801912 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:43:00.801923 | orchestrator | Thursday 19 March 2026 04:42:59 +0000 (0:00:01.043) 0:06:53.258 ******** 2026-03-19 04:43:00.801934 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:00.801945 | orchestrator | 2026-03-19 04:43:00.801956 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:43:00.801967 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.156) 0:06:53.414 ******** 2026-03-19 04:43:00.801978 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.801989 | orchestrator | 2026-03-19 04:43:00.802000 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:43:00.802011 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.151) 0:06:53.565 ******** 2026-03-19 04:43:00.802131 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.802145 | orchestrator | 2026-03-19 04:43:00.802156 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:43:00.802167 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.237) 0:06:53.803 ******** 2026-03-19 04:43:00.802178 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.802189 | orchestrator | 2026-03-19 04:43:00.802199 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:43:00.802211 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.117) 0:06:53.921 ******** 
2026-03-19 04:43:00.802222 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:00.802232 | orchestrator | 2026-03-19 04:43:00.802244 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:43:00.802263 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.135) 0:06:54.056 ******** 2026-03-19 04:43:02.228126 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228232 | orchestrator | 2026-03-19 04:43:02.228251 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:43:02.228266 | orchestrator | Thursday 19 March 2026 04:43:00 +0000 (0:00:00.135) 0:06:54.192 ******** 2026-03-19 04:43:02.228280 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228293 | orchestrator | 2026-03-19 04:43:02.228306 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:43:02.228336 | orchestrator | Thursday 19 March 2026 04:43:01 +0000 (0:00:00.131) 0:06:54.323 ******** 2026-03-19 04:43:02.228349 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228362 | orchestrator | 2026-03-19 04:43:02.228374 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:43:02.228382 | orchestrator | Thursday 19 March 2026 04:43:01 +0000 (0:00:00.112) 0:06:54.435 ******** 2026-03-19 04:43:02.228390 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228398 | orchestrator | 2026-03-19 04:43:02.228405 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:43:02.228414 | orchestrator | Thursday 19 March 2026 04:43:01 +0000 (0:00:00.134) 0:06:54.570 ******** 2026-03-19 04:43:02.228426 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228441 | orchestrator | 2026-03-19 04:43:02.228460 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-19 04:43:02.228471 | orchestrator | Thursday 19 March 2026 04:43:01 +0000 (0:00:00.383) 0:06:54.953 ******** 2026-03-19 04:43:02.228485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:43:02.228540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:43:02.228647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:43:02.228671 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:02.228683 | orchestrator | 2026-03-19 04:43:02.228695 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:43:02.228716 | orchestrator | Thursday 19 March 2026 04:43:01 +0000 (0:00:00.247) 0:06:55.201 ******** 2026-03-19 04:43:02.228728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:02.228754 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942378 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942509 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942529 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942542 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942623 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942639 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942651 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:43:03.942671 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:03.942686 | orchestrator | 2026-03-19 04:43:03.942698 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:43:03.942710 | 
orchestrator | Thursday 19 March 2026 04:43:02 +0000 (0:00:00.280) 0:06:55.481 ******** 2026-03-19 04:43:03.942721 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:03.942733 | orchestrator | 2026-03-19 04:43:03.942744 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:43:03.942755 | orchestrator | Thursday 19 March 2026 04:43:02 +0000 (0:00:00.546) 0:06:56.028 ******** 2026-03-19 04:43:03.942766 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:03.942777 | orchestrator | 2026-03-19 04:43:03.942788 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:43:03.942798 | orchestrator | Thursday 19 March 2026 04:43:02 +0000 (0:00:00.118) 0:06:56.147 ******** 2026-03-19 04:43:03.942813 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:03.942831 | orchestrator | 2026-03-19 04:43:03.942876 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:43:03.942916 | orchestrator | Thursday 19 March 2026 04:43:03 +0000 (0:00:00.509) 0:06:56.656 ******** 2026-03-19 04:43:03.942934 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:03.942953 | orchestrator | 2026-03-19 04:43:03.942971 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:43:03.942990 | orchestrator | Thursday 19 March 2026 04:43:03 +0000 (0:00:00.144) 0:06:56.801 ******** 2026-03-19 04:43:03.943007 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:03.943055 | orchestrator | 2026-03-19 04:43:03.943075 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:43:03.943093 | orchestrator | Thursday 19 March 2026 04:43:03 +0000 (0:00:00.247) 0:06:57.049 ******** 2026-03-19 04:43:03.943113 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:03.943130 | orchestrator | 2026-03-19 04:43:03.943149 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:43:03.943190 | orchestrator | Thursday 19 March 2026 04:43:03 +0000 (0:00:00.147) 0:06:57.196 ******** 2026-03-19 04:43:20.430842 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-19 04:43:20.430952 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-19 04:43:20.430966 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:20.430977 | orchestrator | 2026-03-19 04:43:20.430988 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:43:20.431043 | orchestrator | Thursday 19 March 2026 04:43:04 +0000 (0:00:00.942) 0:06:58.139 ******** 2026-03-19 04:43:20.431055 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:43:20.431065 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:43:20.431074 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:43:20.431083 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431092 | orchestrator | 2026-03-19 04:43:20.431102 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:43:20.431111 | orchestrator | Thursday 19 March 2026 04:43:05 +0000 (0:00:00.160) 0:06:58.299 ******** 2026-03-19 04:43:20.431120 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431129 | orchestrator | 2026-03-19 04:43:20.431137 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:43:20.431147 | orchestrator | Thursday 19 March 2026 04:43:05 +0000 (0:00:00.145) 0:06:58.444 ******** 2026-03-19 04:43:20.431155 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:43:20.431165 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-19 04:43:20.431195 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:20.431204 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:43:20.431213 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:43:20.431222 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:43:20.431230 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:43:20.431239 | orchestrator | 2026-03-19 04:43:20.431248 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:43:20.431257 | orchestrator | Thursday 19 March 2026 04:43:06 +0000 (0:00:01.119) 0:06:59.564 ******** 2026-03-19 04:43:20.431265 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:43:20.431274 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:43:20.431283 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:20.431291 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:43:20.431300 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:43:20.431309 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:43:20.431318 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:43:20.431326 | orchestrator | 2026-03-19 04:43:20.431335 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-19 04:43:20.431345 | orchestrator | Thursday 19 March 2026 04:43:08 +0000 (0:00:02.027) 0:07:01.591 
******** 2026-03-19 04:43:20.431355 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431365 | orchestrator | 2026-03-19 04:43:20.431375 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-19 04:43:20.431385 | orchestrator | Thursday 19 March 2026 04:43:08 +0000 (0:00:00.253) 0:07:01.845 ******** 2026-03-19 04:43:20.431396 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431405 | orchestrator | 2026-03-19 04:43:20.431416 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-19 04:43:20.431425 | orchestrator | Thursday 19 March 2026 04:43:08 +0000 (0:00:00.223) 0:07:02.069 ******** 2026-03-19 04:43:20.431436 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431446 | orchestrator | 2026-03-19 04:43:20.431455 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-19 04:43:20.431466 | orchestrator | Thursday 19 March 2026 04:43:08 +0000 (0:00:00.133) 0:07:02.202 ******** 2026-03-19 04:43:20.431476 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431486 | orchestrator | 2026-03-19 04:43:20.431496 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-19 04:43:20.431506 | orchestrator | Thursday 19 March 2026 04:43:09 +0000 (0:00:00.212) 0:07:02.415 ******** 2026-03-19 04:43:20.431517 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431526 | orchestrator | 2026-03-19 04:43:20.431536 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-19 04:43:20.431547 | orchestrator | Thursday 19 March 2026 04:43:09 +0000 (0:00:00.139) 0:07:02.555 ******** 2026-03-19 04:43:20.431557 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:43:20.431567 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 
04:43:20.431578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:43:20.431588 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431597 | orchestrator | 2026-03-19 04:43:20.431608 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-19 04:43:20.431618 | orchestrator | Thursday 19 March 2026 04:43:09 +0000 (0:00:00.409) 0:07:02.964 ******** 2026-03-19 04:43:20.431628 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-19 04:43:20.431645 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-19 04:43:20.431685 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-19 04:43:20.431696 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-19 04:43:20.431706 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-19 04:43:20.431715 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-19 04:43:20.431727 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.431741 | orchestrator | 2026-03-19 04:43:20.431755 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-19 04:43:20.431770 | orchestrator | Thursday 19 March 2026 04:43:10 +0000 (0:00:00.996) 0:07:03.960 ******** 2026-03-19 04:43:20.431785 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:20.431800 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:43:20.431814 | orchestrator | 2026-03-19 04:43:20.431829 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-19 04:43:20.431844 | orchestrator | Thursday 19 March 2026 04:43:14 +0000 (0:00:03.711) 0:07:07.672 ******** 
2026-03-19 04:43:20.431858 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:43:20.431872 | orchestrator | 2026-03-19 04:43:20.431887 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:43:20.431902 | orchestrator | Thursday 19 March 2026 04:43:15 +0000 (0:00:01.509) 0:07:09.182 ******** 2026-03-19 04:43:20.431917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-19 04:43:20.431933 | orchestrator | 2026-03-19 04:43:20.431943 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:43:20.431952 | orchestrator | Thursday 19 March 2026 04:43:16 +0000 (0:00:00.211) 0:07:09.393 ******** 2026-03-19 04:43:20.431961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-19 04:43:20.431969 | orchestrator | 2026-03-19 04:43:20.431978 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:43:20.431986 | orchestrator | Thursday 19 March 2026 04:43:16 +0000 (0:00:00.449) 0:07:09.843 ******** 2026-03-19 04:43:20.432027 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:20.432038 | orchestrator | 2026-03-19 04:43:20.432047 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:43:20.432056 | orchestrator | Thursday 19 March 2026 04:43:17 +0000 (0:00:00.566) 0:07:10.409 ******** 2026-03-19 04:43:20.432069 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432084 | orchestrator | 2026-03-19 04:43:20.432098 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:43:20.432113 | orchestrator | Thursday 19 March 2026 04:43:17 +0000 (0:00:00.113) 0:07:10.523 ******** 2026-03-19 04:43:20.432127 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
04:43:20.432141 | orchestrator | 2026-03-19 04:43:20.432156 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:43:20.432172 | orchestrator | Thursday 19 March 2026 04:43:17 +0000 (0:00:00.129) 0:07:10.652 ******** 2026-03-19 04:43:20.432186 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432201 | orchestrator | 2026-03-19 04:43:20.432217 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:43:20.432231 | orchestrator | Thursday 19 March 2026 04:43:17 +0000 (0:00:00.146) 0:07:10.798 ******** 2026-03-19 04:43:20.432246 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:20.432261 | orchestrator | 2026-03-19 04:43:20.432275 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:43:20.432292 | orchestrator | Thursday 19 March 2026 04:43:18 +0000 (0:00:00.543) 0:07:11.342 ******** 2026-03-19 04:43:20.432308 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432334 | orchestrator | 2026-03-19 04:43:20.432343 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:43:20.432352 | orchestrator | Thursday 19 March 2026 04:43:18 +0000 (0:00:00.141) 0:07:11.484 ******** 2026-03-19 04:43:20.432360 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432369 | orchestrator | 2026-03-19 04:43:20.432377 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:43:20.432386 | orchestrator | Thursday 19 March 2026 04:43:18 +0000 (0:00:00.131) 0:07:11.615 ******** 2026-03-19 04:43:20.432394 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:20.432403 | orchestrator | 2026-03-19 04:43:20.432411 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:43:20.432420 | orchestrator | Thursday 19 March 2026 
04:43:18 +0000 (0:00:00.550) 0:07:12.166 ******** 2026-03-19 04:43:20.432428 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:20.432436 | orchestrator | 2026-03-19 04:43:20.432445 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:43:20.432454 | orchestrator | Thursday 19 March 2026 04:43:19 +0000 (0:00:00.570) 0:07:12.737 ******** 2026-03-19 04:43:20.432462 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432470 | orchestrator | 2026-03-19 04:43:20.432479 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:43:20.432487 | orchestrator | Thursday 19 March 2026 04:43:19 +0000 (0:00:00.121) 0:07:12.858 ******** 2026-03-19 04:43:20.432496 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:20.432504 | orchestrator | 2026-03-19 04:43:20.432513 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:43:20.432521 | orchestrator | Thursday 19 March 2026 04:43:19 +0000 (0:00:00.159) 0:07:13.018 ******** 2026-03-19 04:43:20.432529 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432538 | orchestrator | 2026-03-19 04:43:20.432546 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:43:20.432555 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.382) 0:07:13.400 ******** 2026-03-19 04:43:20.432563 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:20.432572 | orchestrator | 2026-03-19 04:43:20.432580 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:43:20.432589 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.136) 0:07:13.537 ******** 2026-03-19 04:43:20.432615 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.041409 | orchestrator | 2026-03-19 04:43:32.041545 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:43:32.041568 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.148) 0:07:13.685 ******** 2026-03-19 04:43:32.041584 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.041601 | orchestrator | 2026-03-19 04:43:32.041617 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:43:32.041633 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.129) 0:07:13.815 ******** 2026-03-19 04:43:32.041648 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.041659 | orchestrator | 2026-03-19 04:43:32.041668 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:43:32.041677 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.135) 0:07:13.951 ******** 2026-03-19 04:43:32.041686 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.041696 | orchestrator | 2026-03-19 04:43:32.041705 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:43:32.041714 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.147) 0:07:14.099 ******** 2026-03-19 04:43:32.041723 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.041731 | orchestrator | 2026-03-19 04:43:32.041740 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:43:32.041754 | orchestrator | Thursday 19 March 2026 04:43:20 +0000 (0:00:00.152) 0:07:14.251 ******** 2026-03-19 04:43:32.041769 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.041811 | orchestrator | 2026-03-19 04:43:32.041827 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:43:32.041841 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.208) 0:07:14.459 ******** 2026-03-19 04:43:32.041855 | 
orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.041871 | orchestrator | 2026-03-19 04:43:32.041886 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:43:32.041900 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.141) 0:07:14.600 ******** 2026-03-19 04:43:32.041915 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.041931 | orchestrator | 2026-03-19 04:43:32.041947 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:43:32.041963 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.141) 0:07:14.742 ******** 2026-03-19 04:43:32.041978 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042120 | orchestrator | 2026-03-19 04:43:32.042137 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:43:32.042152 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.139) 0:07:14.882 ******** 2026-03-19 04:43:32.042167 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042181 | orchestrator | 2026-03-19 04:43:32.042195 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:43:32.042210 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.133) 0:07:15.016 ******** 2026-03-19 04:43:32.042225 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042241 | orchestrator | 2026-03-19 04:43:32.042255 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:43:32.042271 | orchestrator | Thursday 19 March 2026 04:43:21 +0000 (0:00:00.116) 0:07:15.132 ******** 2026-03-19 04:43:32.042286 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042301 | orchestrator | 2026-03-19 04:43:32.042316 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-03-19 04:43:32.042330 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.392) 0:07:15.524 ******** 2026-03-19 04:43:32.042344 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042360 | orchestrator | 2026-03-19 04:43:32.042375 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 04:43:32.042390 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.128) 0:07:15.653 ******** 2026-03-19 04:43:32.042404 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042419 | orchestrator | 2026-03-19 04:43:32.042433 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:43:32.042448 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.131) 0:07:15.784 ******** 2026-03-19 04:43:32.042463 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042479 | orchestrator | 2026-03-19 04:43:32.042494 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:43:32.042508 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.125) 0:07:15.910 ******** 2026-03-19 04:43:32.042522 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042538 | orchestrator | 2026-03-19 04:43:32.042554 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:43:32.042568 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.131) 0:07:16.041 ******** 2026-03-19 04:43:32.042582 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042596 | orchestrator | 2026-03-19 04:43:32.042611 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:43:32.042626 | orchestrator | Thursday 19 March 2026 04:43:22 +0000 (0:00:00.130) 0:07:16.171 ******** 2026-03-19 04:43:32.042641 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
04:43:32.042655 | orchestrator | 2026-03-19 04:43:32.042670 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 04:43:32.042683 | orchestrator | Thursday 19 March 2026 04:43:23 +0000 (0:00:00.196) 0:07:16.367 ******** 2026-03-19 04:43:32.042712 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.042727 | orchestrator | 2026-03-19 04:43:32.042741 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:43:32.042756 | orchestrator | Thursday 19 March 2026 04:43:24 +0000 (0:00:00.932) 0:07:17.300 ******** 2026-03-19 04:43:32.042770 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.042785 | orchestrator | 2026-03-19 04:43:32.042801 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:43:32.042815 | orchestrator | Thursday 19 March 2026 04:43:25 +0000 (0:00:01.453) 0:07:18.753 ******** 2026-03-19 04:43:32.042845 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-19 04:43:32.042863 | orchestrator | 2026-03-19 04:43:32.042903 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 04:43:32.042919 | orchestrator | Thursday 19 March 2026 04:43:25 +0000 (0:00:00.193) 0:07:18.947 ******** 2026-03-19 04:43:32.042934 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.042949 | orchestrator | 2026-03-19 04:43:32.042965 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 04:43:32.042979 | orchestrator | Thursday 19 March 2026 04:43:25 +0000 (0:00:00.129) 0:07:19.077 ******** 2026-03-19 04:43:32.043019 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043034 | orchestrator | 2026-03-19 04:43:32.043049 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-03-19 04:43:32.043063 | orchestrator | Thursday 19 March 2026 04:43:26 +0000 (0:00:00.368) 0:07:19.445 ******** 2026-03-19 04:43:32.043078 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 04:43:32.043093 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 04:43:32.043107 | orchestrator | 2026-03-19 04:43:32.043122 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 04:43:32.043134 | orchestrator | Thursday 19 March 2026 04:43:27 +0000 (0:00:00.866) 0:07:20.312 ******** 2026-03-19 04:43:32.043143 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.043151 | orchestrator | 2026-03-19 04:43:32.043160 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 04:43:32.043169 | orchestrator | Thursday 19 March 2026 04:43:27 +0000 (0:00:00.474) 0:07:20.787 ******** 2026-03-19 04:43:32.043177 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043186 | orchestrator | 2026-03-19 04:43:32.043195 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 04:43:32.043203 | orchestrator | Thursday 19 March 2026 04:43:27 +0000 (0:00:00.147) 0:07:20.934 ******** 2026-03-19 04:43:32.043212 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043221 | orchestrator | 2026-03-19 04:43:32.043230 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:43:32.043238 | orchestrator | Thursday 19 March 2026 04:43:27 +0000 (0:00:00.137) 0:07:21.072 ******** 2026-03-19 04:43:32.043247 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043256 | orchestrator | 2026-03-19 04:43:32.043264 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:43:32.043273 | orchestrator | 
Thursday 19 March 2026 04:43:27 +0000 (0:00:00.136) 0:07:21.209 ******** 2026-03-19 04:43:32.043281 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-19 04:43:32.043290 | orchestrator | 2026-03-19 04:43:32.043298 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 04:43:32.043307 | orchestrator | Thursday 19 March 2026 04:43:28 +0000 (0:00:00.229) 0:07:21.439 ******** 2026-03-19 04:43:32.043316 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.043324 | orchestrator | 2026-03-19 04:43:32.043333 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 04:43:32.043341 | orchestrator | Thursday 19 March 2026 04:43:28 +0000 (0:00:00.799) 0:07:22.238 ******** 2026-03-19 04:43:32.043360 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 04:43:32.043369 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 04:43:32.043378 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 04:43:32.043386 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043395 | orchestrator | 2026-03-19 04:43:32.043404 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 04:43:32.043412 | orchestrator | Thursday 19 March 2026 04:43:29 +0000 (0:00:00.150) 0:07:22.388 ******** 2026-03-19 04:43:32.043421 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043429 | orchestrator | 2026-03-19 04:43:32.043438 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-19 04:43:32.043447 | orchestrator | Thursday 19 March 2026 04:43:29 +0000 (0:00:00.122) 0:07:22.511 ******** 2026-03-19 04:43:32.043455 | orchestrator | skipping: [testbed-node-2] 2026-03-19 
04:43:32.043464 | orchestrator | 2026-03-19 04:43:32.043472 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-19 04:43:32.043481 | orchestrator | Thursday 19 March 2026 04:43:29 +0000 (0:00:00.154) 0:07:22.666 ******** 2026-03-19 04:43:32.043495 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043507 | orchestrator | 2026-03-19 04:43:32.043516 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 04:43:32.043524 | orchestrator | Thursday 19 March 2026 04:43:29 +0000 (0:00:00.139) 0:07:22.805 ******** 2026-03-19 04:43:32.043533 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043542 | orchestrator | 2026-03-19 04:43:32.043550 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 04:43:32.043559 | orchestrator | Thursday 19 March 2026 04:43:29 +0000 (0:00:00.388) 0:07:23.194 ******** 2026-03-19 04:43:32.043568 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:32.043577 | orchestrator | 2026-03-19 04:43:32.043585 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:43:32.043594 | orchestrator | Thursday 19 March 2026 04:43:30 +0000 (0:00:00.153) 0:07:23.347 ******** 2026-03-19 04:43:32.043603 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.043612 | orchestrator | 2026-03-19 04:43:32.043620 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:43:32.043629 | orchestrator | Thursday 19 March 2026 04:43:31 +0000 (0:00:01.565) 0:07:24.913 ******** 2026-03-19 04:43:32.043637 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:32.043646 | orchestrator | 2026-03-19 04:43:32.043655 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 04:43:32.043663 | orchestrator | Thursday 19 March 2026 
04:43:31 +0000 (0:00:00.164) 0:07:25.077 ******** 2026-03-19 04:43:32.043677 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-19 04:43:32.043686 | orchestrator | 2026-03-19 04:43:32.043702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-19 04:43:44.192634 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.217) 0:07:25.294 ******** 2026-03-19 04:43:44.192715 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192722 | orchestrator | 2026-03-19 04:43:44.192728 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 04:43:44.192733 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.142) 0:07:25.436 ******** 2026-03-19 04:43:44.192738 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192743 | orchestrator | 2026-03-19 04:43:44.192747 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 04:43:44.192752 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.151) 0:07:25.588 ******** 2026-03-19 04:43:44.192756 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192760 | orchestrator | 2026-03-19 04:43:44.192765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 04:43:44.192769 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.141) 0:07:25.729 ******** 2026-03-19 04:43:44.192788 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192793 | orchestrator | 2026-03-19 04:43:44.192797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-19 04:43:44.192802 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.144) 0:07:25.874 ******** 2026-03-19 04:43:44.192806 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192810 | 
orchestrator | 2026-03-19 04:43:44.192814 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-19 04:43:44.192819 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.154) 0:07:26.029 ******** 2026-03-19 04:43:44.192823 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192827 | orchestrator | 2026-03-19 04:43:44.192832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 04:43:44.192836 | orchestrator | Thursday 19 March 2026 04:43:32 +0000 (0:00:00.148) 0:07:26.178 ******** 2026-03-19 04:43:44.192840 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192844 | orchestrator | 2026-03-19 04:43:44.192848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 04:43:44.192853 | orchestrator | Thursday 19 March 2026 04:43:33 +0000 (0:00:00.149) 0:07:26.327 ******** 2026-03-19 04:43:44.192857 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.192861 | orchestrator | 2026-03-19 04:43:44.192865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 04:43:44.192869 | orchestrator | Thursday 19 March 2026 04:43:33 +0000 (0:00:00.386) 0:07:26.713 ******** 2026-03-19 04:43:44.192873 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:43:44.192879 | orchestrator | 2026-03-19 04:43:44.192883 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:43:44.192887 | orchestrator | Thursday 19 March 2026 04:43:33 +0000 (0:00:00.218) 0:07:26.931 ******** 2026-03-19 04:43:44.192891 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-19 04:43:44.192896 | orchestrator | 2026-03-19 04:43:44.192900 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 
04:43:44.192904 | orchestrator | Thursday 19 March 2026 04:43:33 +0000 (0:00:00.202) 0:07:27.134 ******** 2026-03-19 04:43:44.192908 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-19 04:43:44.192913 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-19 04:43:44.192918 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-19 04:43:44.192922 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-19 04:43:44.192926 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-19 04:43:44.192930 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-19 04:43:44.192934 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-19 04:43:44.192938 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-19 04:43:44.192943 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 04:43:44.192947 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 04:43:44.192951 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 04:43:44.192955 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 04:43:44.192959 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 04:43:44.192964 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 04:43:44.192968 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-19 04:43:44.193029 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-19 04:43:44.193033 | orchestrator | 2026-03-19 04:43:44.193037 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:43:44.193042 | orchestrator | Thursday 19 March 2026 04:43:39 +0000 (0:00:05.729) 0:07:32.863 ******** 2026-03-19 04:43:44.193052 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:43:44.193056 | orchestrator | 2026-03-19 04:43:44.193060 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 04:43:44.193064 | orchestrator | Thursday 19 March 2026 04:43:39 +0000 (0:00:00.128) 0:07:32.991 ******** 2026-03-19 04:43:44.193068 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193072 | orchestrator | 2026-03-19 04:43:44.193076 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:43:44.193081 | orchestrator | Thursday 19 March 2026 04:43:39 +0000 (0:00:00.136) 0:07:33.128 ******** 2026-03-19 04:43:44.193085 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193089 | orchestrator | 2026-03-19 04:43:44.193093 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:43:44.193097 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.178) 0:07:33.306 ******** 2026-03-19 04:43:44.193101 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193106 | orchestrator | 2026-03-19 04:43:44.193120 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 04:43:44.193135 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.137) 0:07:33.443 ******** 2026-03-19 04:43:44.193139 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193143 | orchestrator | 2026-03-19 04:43:44.193147 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:43:44.193152 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.140) 0:07:33.584 ******** 2026-03-19 04:43:44.193156 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193160 | orchestrator | 2026-03-19 04:43:44.193164 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-03-19 04:43:44.193168 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.118) 0:07:33.703 ******** 2026-03-19 04:43:44.193173 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193177 | orchestrator | 2026-03-19 04:43:44.193181 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:43:44.193185 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.132) 0:07:33.836 ******** 2026-03-19 04:43:44.193189 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193193 | orchestrator | 2026-03-19 04:43:44.193198 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:43:44.193202 | orchestrator | Thursday 19 March 2026 04:43:40 +0000 (0:00:00.133) 0:07:33.969 ******** 2026-03-19 04:43:44.193207 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193211 | orchestrator | 2026-03-19 04:43:44.193216 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:43:44.193221 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.378) 0:07:34.348 ******** 2026-03-19 04:43:44.193226 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193231 | orchestrator | 2026-03-19 04:43:44.193235 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:43:44.193240 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.131) 0:07:34.479 ******** 2026-03-19 04:43:44.193245 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193252 | orchestrator | 2026-03-19 04:43:44.193259 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:43:44.193266 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.133) 0:07:34.612 ******** 2026-03-19 
04:43:44.193274 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193285 | orchestrator | 2026-03-19 04:43:44.193292 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:43:44.193298 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.131) 0:07:34.744 ******** 2026-03-19 04:43:44.193305 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193312 | orchestrator | 2026-03-19 04:43:44.193319 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:43:44.193332 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.212) 0:07:34.956 ******** 2026-03-19 04:43:44.193339 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193346 | orchestrator | 2026-03-19 04:43:44.193353 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:43:44.193360 | orchestrator | Thursday 19 March 2026 04:43:41 +0000 (0:00:00.143) 0:07:35.100 ******** 2026-03-19 04:43:44.193367 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193384 | orchestrator | 2026-03-19 04:43:44.193397 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:43:44.193405 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.229) 0:07:35.329 ******** 2026-03-19 04:43:44.193411 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193419 | orchestrator | 2026-03-19 04:43:44.193426 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:43:44.193433 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.129) 0:07:35.459 ******** 2026-03-19 04:43:44.193440 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193448 | orchestrator | 2026-03-19 04:43:44.193455 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:43:44.193461 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.126) 0:07:35.586 ******** 2026-03-19 04:43:44.193465 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193470 | orchestrator | 2026-03-19 04:43:44.193475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:43:44.193479 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.139) 0:07:35.726 ******** 2026-03-19 04:43:44.193484 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193489 | orchestrator | 2026-03-19 04:43:44.193493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:43:44.193498 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.129) 0:07:35.856 ******** 2026-03-19 04:43:44.193503 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193507 | orchestrator | 2026-03-19 04:43:44.193512 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:43:44.193516 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.121) 0:07:35.977 ******** 2026-03-19 04:43:44.193521 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:43:44.193526 | orchestrator | 2026-03-19 04:43:44.193530 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:43:44.193535 | orchestrator | Thursday 19 March 2026 04:43:42 +0000 (0:00:00.142) 0:07:36.119 ******** 2026-03-19 04:43:44.193540 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:43:44.193544 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:43:44.193549 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:43:44.193554 | orchestrator | skipping: [testbed-node-2] 
2026-03-19 04:43:44.193558 | orchestrator | 2026-03-19 04:43:44.193563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:43:44.193572 | orchestrator | Thursday 19 March 2026 04:43:43 +0000 (0:00:00.903) 0:07:37.023 ******** 2026-03-19 04:43:44.193577 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:43:44.193586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:44:39.985452 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:44:39.985584 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:44:39.985601 | orchestrator | 2026-03-19 04:44:39.985614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:44:39.985626 | orchestrator | Thursday 19 March 2026 04:43:44 +0000 (0:00:00.419) 0:07:37.442 ******** 2026-03-19 04:44:39.985638 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:44:39.985649 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:44:39.985682 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:44:39.985694 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:44:39.985705 | orchestrator | 2026-03-19 04:44:39.985716 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:44:39.985727 | orchestrator | Thursday 19 March 2026 04:43:44 +0000 (0:00:00.434) 0:07:37.877 ******** 2026-03-19 04:44:39.985737 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:44:39.985748 | orchestrator | 2026-03-19 04:44:39.985759 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:44:39.985770 | orchestrator | Thursday 19 March 2026 04:43:44 +0000 (0:00:00.135) 0:07:38.012 ******** 2026-03-19 04:44:39.985781 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-03-19 04:44:39.985791 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:44:39.985802 | orchestrator | 2026-03-19 04:44:39.985813 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:44:39.985824 | orchestrator | Thursday 19 March 2026 04:43:45 +0000 (0:00:00.331) 0:07:38.344 ******** 2026-03-19 04:44:39.985836 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:44:39.985848 | orchestrator | 2026-03-19 04:44:39.985858 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:44:39.985869 | orchestrator | Thursday 19 March 2026 04:43:45 +0000 (0:00:00.827) 0:07:39.172 ******** 2026-03-19 04:44:39.985880 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:44:39.985891 | orchestrator | 2026-03-19 04:44:39.985901 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-19 04:44:39.985994 | orchestrator | Thursday 19 March 2026 04:43:46 +0000 (0:00:00.154) 0:07:39.326 ******** 2026-03-19 04:44:39.986011 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-03-19 04:44:39.986086 | orchestrator | 2026-03-19 04:44:39.986100 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-19 04:44:39.986112 | orchestrator | Thursday 19 March 2026 04:43:46 +0000 (0:00:00.232) 0:07:39.558 ******** 2026-03-19 04:44:39.986125 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:44:39.986136 | orchestrator | 2026-03-19 04:44:39.986148 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-19 04:44:39.986160 | orchestrator | Thursday 19 March 2026 04:43:48 +0000 (0:00:02.203) 0:07:41.761 ******** 2026-03-19 04:44:39.986172 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:44:39.986184 | orchestrator | 2026-03-19 04:44:39.986197 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-19 04:44:39.986209 | orchestrator | Thursday 19 March 2026 04:43:48 +0000 (0:00:00.165) 0:07:41.927 ********
2026-03-19 04:44:39.986221 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986233 | orchestrator |
2026-03-19 04:44:39.986246 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-19 04:44:39.986257 | orchestrator | Thursday 19 March 2026 04:43:49 +0000 (0:00:00.393) 0:07:42.320 ********
2026-03-19 04:44:39.986270 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986282 | orchestrator |
2026-03-19 04:44:39.986294 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-19 04:44:39.986306 | orchestrator | Thursday 19 March 2026 04:43:49 +0000 (0:00:00.162) 0:07:42.483 ********
2026-03-19 04:44:39.986318 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:44:39.986330 | orchestrator |
2026-03-19 04:44:39.986344 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-19 04:44:39.986356 | orchestrator | Thursday 19 March 2026 04:43:50 +0000 (0:00:01.104) 0:07:43.587 ********
2026-03-19 04:44:39.986368 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986381 | orchestrator |
2026-03-19 04:44:39.986393 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-19 04:44:39.986404 | orchestrator | Thursday 19 March 2026 04:43:50 +0000 (0:00:00.571) 0:07:44.159 ********
2026-03-19 04:44:39.986415 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986426 | orchestrator |
2026-03-19 04:44:39.986436 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-19 04:44:39.986458 | orchestrator | Thursday 19 March 2026 04:43:51 +0000 (0:00:00.543) 0:07:44.702 ********
2026-03-19 04:44:39.986469 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986479 | orchestrator |
2026-03-19 04:44:39.986490 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-19 04:44:39.986501 | orchestrator | Thursday 19 March 2026 04:43:51 +0000 (0:00:00.461) 0:07:45.164 ********
2026-03-19 04:44:39.986512 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:44:39.986522 | orchestrator |
2026-03-19 04:44:39.986533 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-19 04:44:39.986544 | orchestrator | Thursday 19 March 2026 04:43:52 +0000 (0:00:00.566) 0:07:45.799 ********
2026-03-19 04:44:39.986554 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:44:39.986565 | orchestrator |
2026-03-19 04:44:39.986576 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-19 04:44:39.986586 | orchestrator | Thursday 19 March 2026 04:43:53 +0000 (0:00:00.635) 0:07:46.365 ********
2026-03-19 04:44:39.986597 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:44:39.986608 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-19 04:44:39.986635 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-19 04:44:39.986647 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-19 04:44:39.986657 | orchestrator |
2026-03-19 04:44:39.986686 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-19 04:44:39.986698 | orchestrator | Thursday 19 March 2026 04:43:56 +0000 (0:00:02.970) 0:07:49.336 ********
2026-03-19 04:44:39.986709 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:44:39.986719 | orchestrator |
2026-03-19 04:44:39.986730 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-19 04:44:39.986741 | orchestrator | Thursday 19 March 2026 04:43:57 +0000 (0:00:01.044) 0:07:50.380 ********
2026-03-19 04:44:39.986752 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986763 | orchestrator |
2026-03-19 04:44:39.986774 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-19 04:44:39.986785 | orchestrator | Thursday 19 March 2026 04:43:57 +0000 (0:00:00.141) 0:07:50.522 ********
2026-03-19 04:44:39.986796 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986806 | orchestrator |
2026-03-19 04:44:39.986817 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-19 04:44:39.986828 | orchestrator | Thursday 19 March 2026 04:43:57 +0000 (0:00:00.136) 0:07:50.658 ********
2026-03-19 04:44:39.986839 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986850 | orchestrator |
2026-03-19 04:44:39.986860 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-19 04:44:39.986871 | orchestrator | Thursday 19 March 2026 04:43:58 +0000 (0:00:01.046) 0:07:51.705 ********
2026-03-19 04:44:39.986882 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.986893 | orchestrator |
2026-03-19 04:44:39.986903 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-19 04:44:39.986938 | orchestrator | Thursday 19 March 2026 04:43:59 +0000 (0:00:00.741) 0:07:52.446 ********
2026-03-19 04:44:39.986949 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:39.986960 | orchestrator |
2026-03-19 04:44:39.986971 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-19 04:44:39.986981 | orchestrator | Thursday 19 March 2026 04:43:59 +0000 (0:00:00.125) 0:07:52.572 ********
2026-03-19 04:44:39.986992 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-03-19 04:44:39.987002 | orchestrator |
2026-03-19 04:44:39.987013 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-19 04:44:39.987024 | orchestrator | Thursday 19 March 2026 04:43:59 +0000 (0:00:00.240) 0:07:52.813 ********
2026-03-19 04:44:39.987035 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:39.987053 | orchestrator |
2026-03-19 04:44:39.987064 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-19 04:44:39.987074 | orchestrator | Thursday 19 March 2026 04:43:59 +0000 (0:00:00.121) 0:07:52.934 ********
2026-03-19 04:44:39.987085 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:39.987096 | orchestrator |
2026-03-19 04:44:39.987106 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-19 04:44:39.987117 | orchestrator | Thursday 19 March 2026 04:43:59 +0000 (0:00:00.129) 0:07:53.064 ********
2026-03-19 04:44:39.987128 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-03-19 04:44:39.987139 | orchestrator |
2026-03-19 04:44:39.987149 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-19 04:44:39.987160 | orchestrator | Thursday 19 March 2026 04:44:00 +0000 (0:00:00.218) 0:07:53.283 ********
2026-03-19 04:44:39.987171 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.987181 | orchestrator |
2026-03-19 04:44:39.987192 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-19 04:44:39.987203 | orchestrator | Thursday 19 March 2026 04:44:01 +0000 (0:00:01.338) 0:07:54.621 ********
2026-03-19 04:44:39.987214 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.987224 | orchestrator |
2026-03-19 04:44:39.987235 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-19 04:44:39.987246 | orchestrator | Thursday 19 March 2026 04:44:02 +0000 (0:00:00.962) 0:07:55.584 ********
2026-03-19 04:44:39.987257 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.987267 | orchestrator |
2026-03-19 04:44:39.987278 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-19 04:44:39.987289 | orchestrator | Thursday 19 March 2026 04:44:03 +0000 (0:00:01.432) 0:07:57.017 ********
2026-03-19 04:44:39.987299 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:44:39.987310 | orchestrator |
2026-03-19 04:44:39.987321 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-19 04:44:39.987332 | orchestrator | Thursday 19 March 2026 04:44:06 +0000 (0:00:02.264) 0:07:59.281 ********
2026-03-19 04:44:39.987342 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-03-19 04:44:39.987353 | orchestrator |
2026-03-19 04:44:39.987364 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-19 04:44:39.987375 | orchestrator | Thursday 19 March 2026 04:44:06 +0000 (0:00:00.445) 0:07:59.726 ********
2026-03-19 04:44:39.987385 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-19 04:44:39.987396 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.987407 | orchestrator |
2026-03-19 04:44:39.987418 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-19 04:44:39.987429 | orchestrator | Thursday 19 March 2026 04:44:28 +0000 (0:00:21.924) 0:08:21.651 ********
2026-03-19 04:44:39.987440 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:39.987450 | orchestrator |
2026-03-19 04:44:39.987461 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-19 04:44:39.987472 | orchestrator | Thursday 19 March 2026 04:44:30 +0000 (0:00:01.974) 0:08:23.625 ********
2026-03-19 04:44:39.987482 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:39.987493 | orchestrator |
2026-03-19 04:44:39.987538 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-19 04:44:39.987550 | orchestrator | Thursday 19 March 2026 04:44:30 +0000 (0:00:00.131) 0:08:23.756 ********
2026-03-19 04:44:39.987578 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-19 04:44:49.860242 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-19 04:44:49.860366 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-19 04:44:49.860386 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-19 04:44:49.860405 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-19 04:44:49.860419 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f9f095cace0a26cc6d82176370732ca8f81a5a76'}])
2026-03-19 04:44:49.860433 | orchestrator |
2026-03-19 04:44:49.860446 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-03-19 04:44:49.860459 | orchestrator | Thursday 19 March 2026 04:44:39 +0000 (0:00:09.481) 0:08:33.238 ********
2026-03-19 04:44:49.860470 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:44:49.860482 | orchestrator |
2026-03-19 04:44:49.860494 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-19 04:44:49.860506 | orchestrator | Thursday 19 March 2026 04:44:41 +0000 (0:00:01.520) 0:08:34.759 ********
2026-03-19 04:44:49.860518 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:44:49.860532 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-03-19 04:44:49.860543 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-03-19 04:44:49.860555 | orchestrator |
2026-03-19 04:44:49.860566 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-19 04:44:49.860579 | orchestrator | Thursday 19 March 2026 04:44:42 +0000 (0:00:01.113) 0:08:35.873 ********
2026-03-19 04:44:49.860591 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-19 04:44:49.860605 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-19 04:44:49.860617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-19 04:44:49.860629 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:49.860641 | orchestrator |
2026-03-19 04:44:49.860654 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-03-19 04:44:49.860666 | orchestrator | Thursday 19 March 2026 04:44:43 +0000 (0:00:00.488) 0:08:36.361 ********
2026-03-19 04:44:49.860678 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:44:49.860689 | orchestrator |
2026-03-19 04:44:49.860700 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-03-19 04:44:49.860711 | orchestrator | Thursday 19 March 2026 04:44:43 +0000 (0:00:00.125) 0:08:36.487 ********
2026-03-19 04:44:49.860735 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:49.860748 | orchestrator |
2026-03-19 04:44:49.860760 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-03-19 04:44:49.860772 | orchestrator |
2026-03-19 04:44:49.860785 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-03-19 04:44:49.860797 | orchestrator | Thursday 19 March 2026 04:44:45 +0000 (0:00:02.179) 0:08:38.666 ********
2026-03-19 04:44:49.860809 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:44:49.860821 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:44:49.860833 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:44:49.860845 | orchestrator |
2026-03-19 04:44:49.860871 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-19 04:44:49.860885 | orchestrator |
2026-03-19 04:44:49.860897 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-19 04:44:49.860989 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.721) 0:08:39.388 ********
2026-03-19 04:44:49.861003 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861016 | orchestrator |
2026-03-19 04:44:49.861029 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 04:44:49.861065 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.204) 0:08:39.609 ********
2026-03-19 04:44:49.861078 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861091 | orchestrator |
2026-03-19 04:44:49.861102 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 04:44:49.861115 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.157) 0:08:39.814 ********
2026-03-19 04:44:49.861127 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861138 | orchestrator |
2026-03-19 04:44:49.861150 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 04:44:49.861162 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.144) 0:08:39.971 ********
2026-03-19 04:44:49.861174 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861186 | orchestrator |
2026-03-19 04:44:49.861198 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 04:44:49.861210 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.131) 0:08:40.116 ********
2026-03-19 04:44:49.861222 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861233 | orchestrator |
2026-03-19 04:44:49.861241 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 04:44:49.861248 | orchestrator | Thursday 19 March 2026 04:44:46 +0000 (0:00:00.131) 0:08:40.247 ********
2026-03-19 04:44:49.861255 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861262 | orchestrator |
2026-03-19 04:44:49.861269 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 04:44:49.861276 | orchestrator | Thursday 19 March 2026 04:44:47 +0000 (0:00:00.135) 0:08:40.383 ********
2026-03-19 04:44:49.861283 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861290 | orchestrator |
2026-03-19 04:44:49.861296 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 04:44:49.861304 | orchestrator | Thursday 19 March 2026 04:44:47 +0000 (0:00:00.123) 0:08:40.506 ********
2026-03-19 04:44:49.861311 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861318 | orchestrator |
2026-03-19 04:44:49.861325 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 04:44:49.861332 | orchestrator | Thursday 19 March 2026 04:44:47 +0000 (0:00:00.373) 0:08:40.880 ********
2026-03-19 04:44:49.861339 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861346 | orchestrator |
2026-03-19 04:44:49.861353 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 04:44:49.861360 | orchestrator | Thursday 19 March 2026 04:44:47 +0000 (0:00:00.145) 0:08:41.025 ********
2026-03-19 04:44:49.861367 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861374 | orchestrator |
2026-03-19 04:44:49.861381 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 04:44:49.861398 | orchestrator | Thursday 19 March 2026 04:44:47 +0000 (0:00:00.134) 0:08:41.159 ********
2026-03-19 04:44:49.861406 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861432 | orchestrator |
2026-03-19 04:44:49.861440 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 04:44:49.861447 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.132) 0:08:41.292 ********
2026-03-19 04:44:49.861455 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861462 | orchestrator |
2026-03-19 04:44:49.861469 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 04:44:49.861476 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.199) 0:08:41.491 ********
2026-03-19 04:44:49.861483 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861491 | orchestrator |
2026-03-19 04:44:49.861498 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 04:44:49.861505 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.156) 0:08:41.648 ********
2026-03-19 04:44:49.861512 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861519 | orchestrator |
2026-03-19 04:44:49.861526 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:44:49.861533 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.141) 0:08:41.789 ********
2026-03-19 04:44:49.861541 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861548 | orchestrator |
2026-03-19 04:44:49.861555 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:44:49.861562 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.140) 0:08:41.930 ********
2026-03-19 04:44:49.861569 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861576 | orchestrator |
2026-03-19 04:44:49.861584 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:44:49.861591 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.135) 0:08:42.065 ********
2026-03-19 04:44:49.861598 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861605 | orchestrator |
2026-03-19 04:44:49.861612 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:44:49.861619 | orchestrator | Thursday 19 March 2026 04:44:48 +0000 (0:00:00.132) 0:08:42.198 ********
2026-03-19 04:44:49.861626 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861634 | orchestrator |
2026-03-19 04:44:49.861641 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:44:49.861648 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.143) 0:08:42.341 ********
2026-03-19 04:44:49.861655 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861662 | orchestrator |
2026-03-19 04:44:49.861669 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:44:49.861677 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.131) 0:08:42.473 ********
2026-03-19 04:44:49.861684 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861695 | orchestrator |
2026-03-19 04:44:49.861716 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:44:49.861730 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.380) 0:08:42.854 ********
2026-03-19 04:44:49.861742 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:49.861754 | orchestrator |
2026-03-19 04:44:49.861766 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:44:49.861778 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.133) 0:08:42.988 ********
2026-03-19 04:44:49.861800 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600021 | orchestrator |
2026-03-19 04:44:57.600113 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:44:57.600124 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.131) 0:08:43.119 ********
2026-03-19 04:44:57.600132 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600139 | orchestrator |
2026-03-19 04:44:57.600155 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:44:57.600185 | orchestrator | Thursday 19 March 2026 04:44:49 +0000 (0:00:00.135) 0:08:43.254 ********
2026-03-19 04:44:57.600197 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600206 | orchestrator |
2026-03-19 04:44:57.600216 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:44:57.600225 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.219) 0:08:43.474 ********
2026-03-19 04:44:57.600234 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600243 | orchestrator |
2026-03-19 04:44:57.600260 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:44:57.600273 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.127) 0:08:43.602 ********
2026-03-19 04:44:57.600283 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600293 | orchestrator |
2026-03-19 04:44:57.600304 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:44:57.600316 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.146) 0:08:43.748 ********
2026-03-19 04:44:57.600327 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600339 | orchestrator |
2026-03-19 04:44:57.600351 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:44:57.600358 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.143) 0:08:43.891 ********
2026-03-19 04:44:57.600365 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600371 | orchestrator |
2026-03-19 04:44:57.600378 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:44:57.600385 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.137) 0:08:44.029 ********
2026-03-19 04:44:57.600391 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600398 | orchestrator |
2026-03-19 04:44:57.600404 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:44:57.600411 | orchestrator | Thursday 19 March 2026 04:44:50 +0000 (0:00:00.137) 0:08:44.166 ********
2026-03-19 04:44:57.600418 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600424 | orchestrator |
2026-03-19 04:44:57.600431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:44:57.600438 | orchestrator | Thursday 19 March 2026 04:44:51 +0000 (0:00:00.127) 0:08:44.294 ********
2026-03-19 04:44:57.600445 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600451 | orchestrator |
2026-03-19 04:44:57.600458 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:44:57.600465 | orchestrator | Thursday 19 March 2026 04:44:51 +0000 (0:00:00.131) 0:08:44.425 ********
2026-03-19 04:44:57.600471 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600486 | orchestrator |
2026-03-19 04:44:57.600493 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:44:57.600500 | orchestrator | Thursday 19 March 2026 04:44:51 +0000 (0:00:00.475) 0:08:44.901 ********
2026-03-19 04:44:57.600506 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600513 | orchestrator |
2026-03-19 04:44:57.600520 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:44:57.600527 | orchestrator | Thursday 19 March 2026 04:44:51 +0000 (0:00:00.139) 0:08:45.040 ********
2026-03-19 04:44:57.600533 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600542 | orchestrator |
2026-03-19 04:44:57.600549 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:44:57.600557 | orchestrator | Thursday 19 March 2026 04:44:51 +0000 (0:00:00.136) 0:08:45.177 ********
2026-03-19 04:44:57.600565 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600572 | orchestrator |
2026-03-19 04:44:57.600580 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:44:57.600587 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.160) 0:08:45.338 ********
2026-03-19 04:44:57.600595 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600603 | orchestrator |
2026-03-19 04:44:57.600624 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:44:57.600636 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.137) 0:08:45.475 ********
2026-03-19 04:44:57.600648 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600659 | orchestrator |
2026-03-19 04:44:57.600669 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:44:57.600682 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.140) 0:08:45.616 ********
2026-03-19 04:44:57.600694 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600707 | orchestrator |
2026-03-19 04:44:57.600719 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:44:57.600730 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.123) 0:08:45.739 ********
2026-03-19 04:44:57.600738 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600746 | orchestrator |
2026-03-19 04:44:57.600754 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:44:57.600763 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.137) 0:08:45.876 ********
2026-03-19 04:44:57.600770 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600778 | orchestrator |
2026-03-19 04:44:57.600798 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:44:57.600806 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.150) 0:08:46.027 ********
2026-03-19 04:44:57.600815 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600822 | orchestrator |
2026-03-19 04:44:57.600830 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:44:57.600838 | orchestrator | Thursday 19 March 2026 04:44:52 +0000 (0:00:00.128) 0:08:46.155 ********
2026-03-19 04:44:57.600861 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600870 | orchestrator |
2026-03-19 04:44:57.600878 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:44:57.600886 | orchestrator | Thursday 19 March 2026 04:44:53 +0000 (0:00:00.182) 0:08:46.338 ********
2026-03-19 04:44:57.600914 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600923 | orchestrator |
2026-03-19 04:44:57.600929 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:44:57.600936 | orchestrator | Thursday 19 March 2026 04:44:53 +0000 (0:00:00.150) 0:08:46.488 ********
2026-03-19 04:44:57.600943 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600949 | orchestrator |
2026-03-19 04:44:57.600956 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:44:57.600963 | orchestrator | Thursday 19 March 2026 04:44:53 +0000 (0:00:00.116) 0:08:46.605 ********
2026-03-19 04:44:57.600969 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.600976 | orchestrator |
2026-03-19 04:44:57.600983 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:44:57.600989 | orchestrator | Thursday 19 March 2026 04:44:53 +0000 (0:00:00.352) 0:08:46.957 ********
2026-03-19 04:44:57.600996 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601002 | orchestrator |
2026-03-19 04:44:57.601009 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:44:57.601016 | orchestrator | Thursday 19 March 2026 04:44:53 +0000 (0:00:00.228) 0:08:47.186 ********
2026-03-19 04:44:57.601022 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601029 | orchestrator |
2026-03-19 04:44:57.601036 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:44:57.601042 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.145) 0:08:47.331 ********
2026-03-19 04:44:57.601049 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601056 | orchestrator |
2026-03-19 04:44:57.601062 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:44:57.601069 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.231) 0:08:47.562 ********
2026-03-19 04:44:57.601083 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601089 | orchestrator |
2026-03-19 04:44:57.601096 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:44:57.601103 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.136) 0:08:47.698 ********
2026-03-19 04:44:57.601109 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601116 | orchestrator |
2026-03-19 04:44:57.601123 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:44:57.601131 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.136) 0:08:47.834 ********
2026-03-19 04:44:57.601138 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601144 | orchestrator |
2026-03-19 04:44:57.601151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:44:57.601161 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.137) 0:08:47.972 ********
2026-03-19 04:44:57.601173 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601184 | orchestrator |
2026-03-19 04:44:57.601195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:44:57.601207 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.141) 0:08:48.114 ********
2026-03-19 04:44:57.601219 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601229 | orchestrator |
2026-03-19 04:44:57.601236 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:44:57.601242 | orchestrator | Thursday 19 March 2026 04:44:54 +0000 (0:00:00.139) 0:08:48.253 ********
2026-03-19 04:44:57.601249 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601256 | orchestrator |
2026-03-19 04:44:57.601262 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:44:57.601269 | orchestrator | Thursday 19 March 2026 04:44:55 +0000 (0:00:00.139) 0:08:48.393 ********
2026-03-19 04:44:57.601276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:44:57.601283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:44:57.601289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:44:57.601296 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601303 | orchestrator |
2026-03-19 04:44:57.601309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:44:57.601316 | orchestrator | Thursday 19 March 2026 04:44:55 +0000 (0:00:00.405) 0:08:48.798 ********
2026-03-19 04:44:57.601323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:44:57.601329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:44:57.601336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:44:57.601342 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601349 | orchestrator |
2026-03-19 04:44:57.601356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:44:57.601362 | orchestrator | Thursday 19 March 2026 04:44:56 +0000 (0:00:00.673) 0:08:49.472 ********
2026-03-19 04:44:57.601369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-19 04:44:57.601375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-19 04:44:57.601382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-19 04:44:57.601389 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601395 | orchestrator |
2026-03-19 04:44:57.601407 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:44:57.601413 | orchestrator | Thursday 19 March 2026 04:44:56 +0000 (0:00:00.663) 0:08:50.136 ********
2026-03-19 04:44:57.601420 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:44:57.601427 | orchestrator |
2026-03-19 04:44:57.601433 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 04:44:57.601440 | orchestrator | Thursday 19 March 2026 04:44:57 +0000 (0:00:00.387) 0:08:50.523 ********
2026-03-19 04:44:57.601453 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-19 04:44:57.601466 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:45:04.954689 | orchestrator |
2026-03-19 04:45:04.954831 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 04:45:04.954860 | orchestrator | Thursday 19 March 2026 04:44:57 +0000 (0:00:00.331) 0:08:50.855 ********
2026-03-19 04:45:04.954879 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:45:04.954960 | orchestrator |
2026-03-19 04:45:04.954981 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-19 04:45:04.955000 | orchestrator | Thursday 19 March 2026 04:44:57 +0000 (0:00:00.201) 0:08:51.056 ********
2026-03-19 04:45:04.955020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-19 04:45:04.955039
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:45:04.955058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:45:04.955077 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:04.955094 | orchestrator | 2026-03-19 04:45:04.955112 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 04:45:04.955133 | orchestrator | Thursday 19 March 2026 04:44:58 +0000 (0:00:00.429) 0:08:51.486 ******** 2026-03-19 04:45:04.955151 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:04.955170 | orchestrator | 2026-03-19 04:45:04.955190 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-19 04:45:04.955205 | orchestrator | Thursday 19 March 2026 04:44:58 +0000 (0:00:00.144) 0:08:51.630 ******** 2026-03-19 04:45:04.955217 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:04.955230 | orchestrator | 2026-03-19 04:45:04.955243 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-19 04:45:04.955257 | orchestrator | Thursday 19 March 2026 04:44:58 +0000 (0:00:00.142) 0:08:51.773 ******** 2026-03-19 04:45:04.955271 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:04.955283 | orchestrator | 2026-03-19 04:45:04.955296 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-19 04:45:04.955309 | orchestrator | Thursday 19 March 2026 04:44:58 +0000 (0:00:00.147) 0:08:51.921 ******** 2026-03-19 04:45:04.955322 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:04.955335 | orchestrator | 2026-03-19 04:45:04.955348 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-19 04:45:04.955361 | orchestrator | 2026-03-19 04:45:04.955374 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-03-19 04:45:04.955387 | orchestrator | Thursday 19 March 2026 04:44:59 +0000 (0:00:00.618) 0:08:52.539 ******** 2026-03-19 04:45:04.955400 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955415 | orchestrator | 2026-03-19 04:45:04.955434 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:45:04.955466 | orchestrator | Thursday 19 March 2026 04:44:59 +0000 (0:00:00.203) 0:08:52.742 ******** 2026-03-19 04:45:04.955485 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955503 | orchestrator | 2026-03-19 04:45:04.955520 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:45:04.955539 | orchestrator | Thursday 19 March 2026 04:44:59 +0000 (0:00:00.454) 0:08:53.197 ******** 2026-03-19 04:45:04.955556 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955573 | orchestrator | 2026-03-19 04:45:04.955592 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:45:04.955612 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.134) 0:08:53.332 ******** 2026-03-19 04:45:04.955630 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955648 | orchestrator | 2026-03-19 04:45:04.955667 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:45:04.955686 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.137) 0:08:53.470 ******** 2026-03-19 04:45:04.955704 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955722 | orchestrator | 2026-03-19 04:45:04.955733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:45:04.955774 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.151) 0:08:53.621 ******** 2026-03-19 04:45:04.955785 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955797 | orchestrator | 2026-03-19 04:45:04.955808 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:45:04.955819 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.136) 0:08:53.758 ******** 2026-03-19 04:45:04.955830 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955840 | orchestrator | 2026-03-19 04:45:04.955851 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:45:04.955862 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.134) 0:08:53.893 ******** 2026-03-19 04:45:04.955872 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955883 | orchestrator | 2026-03-19 04:45:04.955927 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:45:04.955939 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.143) 0:08:54.036 ******** 2026-03-19 04:45:04.955950 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.955960 | orchestrator | 2026-03-19 04:45:04.955971 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:45:04.955982 | orchestrator | Thursday 19 March 2026 04:45:00 +0000 (0:00:00.137) 0:08:54.174 ******** 2026-03-19 04:45:04.955993 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956004 | orchestrator | 2026-03-19 04:45:04.956014 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:45:04.956025 | orchestrator | Thursday 19 March 2026 04:45:01 +0000 (0:00:00.142) 0:08:54.316 ******** 2026-03-19 04:45:04.956036 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956047 | orchestrator | 2026-03-19 04:45:04.956072 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-03-19 04:45:04.956083 | orchestrator | Thursday 19 March 2026 04:45:01 +0000 (0:00:00.125) 0:08:54.442 ******** 2026-03-19 04:45:04.956093 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956104 | orchestrator | 2026-03-19 04:45:04.956115 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:45:04.956125 | orchestrator | Thursday 19 March 2026 04:45:01 +0000 (0:00:00.215) 0:08:54.657 ******** 2026-03-19 04:45:04.956136 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956147 | orchestrator | 2026-03-19 04:45:04.956183 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:45:04.956212 | orchestrator | Thursday 19 March 2026 04:45:01 +0000 (0:00:00.133) 0:08:54.790 ******** 2026-03-19 04:45:04.956232 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956251 | orchestrator | 2026-03-19 04:45:04.956269 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:45:04.956289 | orchestrator | Thursday 19 March 2026 04:45:01 +0000 (0:00:00.409) 0:08:55.200 ******** 2026-03-19 04:45:04.956307 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956326 | orchestrator | 2026-03-19 04:45:04.956346 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:45:04.956364 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.162) 0:08:55.363 ******** 2026-03-19 04:45:04.956383 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956396 | orchestrator | 2026-03-19 04:45:04.956407 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:45:04.956418 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.147) 0:08:55.510 ******** 2026-03-19 04:45:04.956428 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956439 
| orchestrator | 2026-03-19 04:45:04.956450 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:45:04.956461 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.133) 0:08:55.644 ******** 2026-03-19 04:45:04.956471 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956482 | orchestrator | 2026-03-19 04:45:04.956505 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 04:45:04.956516 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.138) 0:08:55.783 ******** 2026-03-19 04:45:04.956527 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956537 | orchestrator | 2026-03-19 04:45:04.956548 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 04:45:04.956560 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.148) 0:08:55.932 ******** 2026-03-19 04:45:04.956576 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956602 | orchestrator | 2026-03-19 04:45:04.956625 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:45:04.956643 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.138) 0:08:56.071 ******** 2026-03-19 04:45:04.956661 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956678 | orchestrator | 2026-03-19 04:45:04.956697 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:45:04.956716 | orchestrator | Thursday 19 March 2026 04:45:02 +0000 (0:00:00.135) 0:08:56.206 ******** 2026-03-19 04:45:04.956735 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956753 | orchestrator | 2026-03-19 04:45:04.956773 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:45:04.956791 | orchestrator | Thursday 19 
March 2026 04:45:03 +0000 (0:00:00.141) 0:08:56.348 ******** 2026-03-19 04:45:04.956810 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956822 | orchestrator | 2026-03-19 04:45:04.956833 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:45:04.956844 | orchestrator | Thursday 19 March 2026 04:45:03 +0000 (0:00:00.128) 0:08:56.476 ******** 2026-03-19 04:45:04.956854 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956865 | orchestrator | 2026-03-19 04:45:04.956876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 04:45:04.956922 | orchestrator | Thursday 19 March 2026 04:45:03 +0000 (0:00:00.203) 0:08:56.679 ******** 2026-03-19 04:45:04.956936 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956946 | orchestrator | 2026-03-19 04:45:04.956957 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:45:04.956968 | orchestrator | Thursday 19 March 2026 04:45:03 +0000 (0:00:00.136) 0:08:56.816 ******** 2026-03-19 04:45:04.956979 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.956990 | orchestrator | 2026-03-19 04:45:04.957000 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:45:04.957011 | orchestrator | Thursday 19 March 2026 04:45:03 +0000 (0:00:00.356) 0:08:57.172 ******** 2026-03-19 04:45:04.957022 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957033 | orchestrator | 2026-03-19 04:45:04.957044 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:45:04.957055 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.137) 0:08:57.310 ******** 2026-03-19 04:45:04.957065 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957076 | orchestrator | 2026-03-19 04:45:04.957087 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:45:04.957098 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.137) 0:08:57.447 ******** 2026-03-19 04:45:04.957108 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957119 | orchestrator | 2026-03-19 04:45:04.957130 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:45:04.957140 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.132) 0:08:57.580 ******** 2026-03-19 04:45:04.957151 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957162 | orchestrator | 2026-03-19 04:45:04.957173 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:45:04.957184 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.134) 0:08:57.715 ******** 2026-03-19 04:45:04.957203 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957239 | orchestrator | 2026-03-19 04:45:04.957275 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 04:45:04.957293 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.135) 0:08:57.850 ******** 2026-03-19 04:45:04.957311 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957328 | orchestrator | 2026-03-19 04:45:04.957345 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:45:04.957363 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.208) 0:08:58.059 ******** 2026-03-19 04:45:04.957380 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:04.957395 | orchestrator | 2026-03-19 04:45:04.957428 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:45:13.067518 | orchestrator | Thursday 19 March 2026 04:45:04 +0000 (0:00:00.151) 0:08:58.210 ******** 
2026-03-19 04:45:13.067612 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067622 | orchestrator | 2026-03-19 04:45:13.067630 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 04:45:13.067638 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.145) 0:08:58.356 ******** 2026-03-19 04:45:13.067644 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067652 | orchestrator | 2026-03-19 04:45:13.067659 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:45:13.067666 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.122) 0:08:58.479 ******** 2026-03-19 04:45:13.067672 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067679 | orchestrator | 2026-03-19 04:45:13.067686 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:45:13.067692 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.138) 0:08:58.617 ******** 2026-03-19 04:45:13.067699 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067705 | orchestrator | 2026-03-19 04:45:13.067712 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 04:45:13.067718 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.130) 0:08:58.748 ******** 2026-03-19 04:45:13.067725 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067732 | orchestrator | 2026-03-19 04:45:13.067737 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:45:13.067744 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.352) 0:08:59.100 ******** 2026-03-19 04:45:13.067751 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067757 | orchestrator | 2026-03-19 04:45:13.067764 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-03-19 04:45:13.067771 | orchestrator | Thursday 19 March 2026 04:45:05 +0000 (0:00:00.141) 0:08:59.241 ******** 2026-03-19 04:45:13.067777 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067783 | orchestrator | 2026-03-19 04:45:13.067790 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:45:13.067797 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 (0:00:00.129) 0:08:59.371 ******** 2026-03-19 04:45:13.067803 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067810 | orchestrator | 2026-03-19 04:45:13.067817 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:45:13.067823 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 (0:00:00.143) 0:08:59.514 ******** 2026-03-19 04:45:13.067830 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067836 | orchestrator | 2026-03-19 04:45:13.067843 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:45:13.067872 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 (0:00:00.128) 0:08:59.642 ******** 2026-03-19 04:45:13.067953 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.067961 | orchestrator | 2026-03-19 04:45:13.067968 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:45:13.067974 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 (0:00:00.140) 0:08:59.783 ******** 2026-03-19 04:45:13.068005 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068013 | orchestrator | 2026-03-19 04:45:13.068019 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:45:13.068025 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 
(0:00:00.133) 0:08:59.917 ******** 2026-03-19 04:45:13.068032 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068038 | orchestrator | 2026-03-19 04:45:13.068044 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:45:13.068050 | orchestrator | Thursday 19 March 2026 04:45:06 +0000 (0:00:00.128) 0:09:00.046 ******** 2026-03-19 04:45:13.068057 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068063 | orchestrator | 2026-03-19 04:45:13.068070 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:45:13.068077 | orchestrator | Thursday 19 March 2026 04:45:07 +0000 (0:00:00.226) 0:09:00.272 ******** 2026-03-19 04:45:13.068083 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068090 | orchestrator | 2026-03-19 04:45:13.068097 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:45:13.068104 | orchestrator | Thursday 19 March 2026 04:45:07 +0000 (0:00:00.141) 0:09:00.413 ******** 2026-03-19 04:45:13.068111 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068118 | orchestrator | 2026-03-19 04:45:13.068125 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:45:13.068131 | orchestrator | Thursday 19 March 2026 04:45:07 +0000 (0:00:00.224) 0:09:00.637 ******** 2026-03-19 04:45:13.068137 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068143 | orchestrator | 2026-03-19 04:45:13.068149 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:45:13.068155 | orchestrator | Thursday 19 March 2026 04:45:07 +0000 (0:00:00.134) 0:09:00.772 ******** 2026-03-19 04:45:13.068160 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068167 | orchestrator | 2026-03-19 04:45:13.068173 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:45:13.068181 | orchestrator | Thursday 19 March 2026 04:45:07 +0000 (0:00:00.123) 0:09:00.895 ******** 2026-03-19 04:45:13.068202 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068208 | orchestrator | 2026-03-19 04:45:13.068214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:45:13.068220 | orchestrator | Thursday 19 March 2026 04:45:08 +0000 (0:00:00.393) 0:09:01.288 ******** 2026-03-19 04:45:13.068226 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068232 | orchestrator | 2026-03-19 04:45:13.068239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:45:13.068245 | orchestrator | Thursday 19 March 2026 04:45:08 +0000 (0:00:00.131) 0:09:01.420 ******** 2026-03-19 04:45:13.068251 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068257 | orchestrator | 2026-03-19 04:45:13.068279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:45:13.068286 | orchestrator | Thursday 19 March 2026 04:45:08 +0000 (0:00:00.137) 0:09:01.558 ******** 2026-03-19 04:45:13.068293 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068300 | orchestrator | 2026-03-19 04:45:13.068305 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:45:13.068311 | orchestrator | Thursday 19 March 2026 04:45:08 +0000 (0:00:00.144) 0:09:01.703 ******** 2026-03-19 04:45:13.068317 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:45:13.068323 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:45:13.068329 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:45:13.068334 | orchestrator | 
skipping: [testbed-node-1] 2026-03-19 04:45:13.068340 | orchestrator | 2026-03-19 04:45:13.068345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:45:13.068359 | orchestrator | Thursday 19 March 2026 04:45:08 +0000 (0:00:00.440) 0:09:02.143 ******** 2026-03-19 04:45:13.068366 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:45:13.068372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:45:13.068379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:45:13.068384 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068390 | orchestrator | 2026-03-19 04:45:13.068397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:45:13.068403 | orchestrator | Thursday 19 March 2026 04:45:09 +0000 (0:00:00.376) 0:09:02.520 ******** 2026-03-19 04:45:13.068408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:45:13.068414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:45:13.068420 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:45:13.068427 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068433 | orchestrator | 2026-03-19 04:45:13.068439 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:45:13.068446 | orchestrator | Thursday 19 March 2026 04:45:09 +0000 (0:00:00.424) 0:09:02.944 ******** 2026-03-19 04:45:13.068451 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068458 | orchestrator | 2026-03-19 04:45:13.068464 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:45:13.068470 | orchestrator | Thursday 19 March 2026 04:45:09 +0000 (0:00:00.134) 0:09:03.079 ******** 2026-03-19 04:45:13.068477 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-19 04:45:13.068483 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068489 | orchestrator | 2026-03-19 04:45:13.068495 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:45:13.068502 | orchestrator | Thursday 19 March 2026 04:45:10 +0000 (0:00:00.339) 0:09:03.418 ******** 2026-03-19 04:45:13.068508 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068514 | orchestrator | 2026-03-19 04:45:13.068520 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:45:13.068527 | orchestrator | Thursday 19 March 2026 04:45:10 +0000 (0:00:00.266) 0:09:03.685 ******** 2026-03-19 04:45:13.068533 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:45:13.068539 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:45:13.068546 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:45:13.068553 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068558 | orchestrator | 2026-03-19 04:45:13.068565 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 04:45:13.068571 | orchestrator | Thursday 19 March 2026 04:45:11 +0000 (0:00:00.686) 0:09:04.372 ******** 2026-03-19 04:45:13.068577 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068583 | orchestrator | 2026-03-19 04:45:13.068590 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-19 04:45:13.068595 | orchestrator | Thursday 19 March 2026 04:45:11 +0000 (0:00:00.142) 0:09:04.515 ******** 2026-03-19 04:45:13.068602 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068608 | orchestrator | 2026-03-19 04:45:13.068614 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-03-19 04:45:13.068621 | orchestrator | Thursday 19 March 2026 04:45:11 +0000 (0:00:00.401) 0:09:04.916 ******** 2026-03-19 04:45:13.068627 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068633 | orchestrator | 2026-03-19 04:45:13.068639 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-19 04:45:13.068646 | orchestrator | Thursday 19 March 2026 04:45:11 +0000 (0:00:00.134) 0:09:05.051 ******** 2026-03-19 04:45:13.068652 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:45:13.068659 | orchestrator | 2026-03-19 04:45:13.068665 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-19 04:45:13.068676 | orchestrator | 2026-03-19 04:45:13.068683 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-19 04:45:13.068689 | orchestrator | Thursday 19 March 2026 04:45:12 +0000 (0:00:00.573) 0:09:05.625 ******** 2026-03-19 04:45:13.068695 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:13.068701 | orchestrator | 2026-03-19 04:45:13.068708 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:45:13.068714 | orchestrator | Thursday 19 March 2026 04:45:12 +0000 (0:00:00.203) 0:09:05.828 ******** 2026-03-19 04:45:13.068725 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:13.068731 | orchestrator | 2026-03-19 04:45:13.068738 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:45:13.068744 | orchestrator | Thursday 19 March 2026 04:45:12 +0000 (0:00:00.211) 0:09:06.040 ******** 2026-03-19 04:45:13.068750 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:13.068756 | orchestrator | 2026-03-19 04:45:13.068763 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-19 04:45:13.068769 | orchestrator | Thursday 19 March 2026 04:45:12 +0000 (0:00:00.144) 0:09:06.185 ******** 2026-03-19 04:45:13.068781 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470299 | orchestrator | 2026-03-19 04:45:19.470432 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:45:19.470456 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.142) 0:09:06.327 ******** 2026-03-19 04:45:19.470473 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470490 | orchestrator | 2026-03-19 04:45:19.470505 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:45:19.470520 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.126) 0:09:06.453 ******** 2026-03-19 04:45:19.470535 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470550 | orchestrator | 2026-03-19 04:45:19.470565 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:45:19.470578 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.135) 0:09:06.589 ******** 2026-03-19 04:45:19.470593 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470608 | orchestrator | 2026-03-19 04:45:19.470622 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:45:19.470635 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.129) 0:09:06.718 ******** 2026-03-19 04:45:19.470649 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470664 | orchestrator | 2026-03-19 04:45:19.470679 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:45:19.470695 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.349) 0:09:07.068 ******** 2026-03-19 04:45:19.470710 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470724 
| orchestrator | 2026-03-19 04:45:19.470738 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:45:19.470754 | orchestrator | Thursday 19 March 2026 04:45:13 +0000 (0:00:00.142) 0:09:07.210 ******** 2026-03-19 04:45:19.470769 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470784 | orchestrator | 2026-03-19 04:45:19.470800 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:45:19.470815 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.153) 0:09:07.363 ******** 2026-03-19 04:45:19.470830 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470845 | orchestrator | 2026-03-19 04:45:19.470861 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:45:19.470906 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.113) 0:09:07.477 ******** 2026-03-19 04:45:19.470923 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.470938 | orchestrator | 2026-03-19 04:45:19.470954 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:45:19.470970 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.202) 0:09:07.679 ******** 2026-03-19 04:45:19.470985 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471029 | orchestrator | 2026-03-19 04:45:19.471045 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:45:19.471061 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.155) 0:09:07.835 ******** 2026-03-19 04:45:19.471076 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471091 | orchestrator | 2026-03-19 04:45:19.471107 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:45:19.471123 | orchestrator | Thursday 19 March 2026 
04:45:14 +0000 (0:00:00.141) 0:09:07.976 ******** 2026-03-19 04:45:19.471138 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471154 | orchestrator | 2026-03-19 04:45:19.471168 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:45:19.471184 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.136) 0:09:08.113 ******** 2026-03-19 04:45:19.471199 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471213 | orchestrator | 2026-03-19 04:45:19.471229 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:45:19.471243 | orchestrator | Thursday 19 March 2026 04:45:14 +0000 (0:00:00.133) 0:09:08.247 ******** 2026-03-19 04:45:19.471258 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471274 | orchestrator | 2026-03-19 04:45:19.471288 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:45:19.471302 | orchestrator | Thursday 19 March 2026 04:45:15 +0000 (0:00:00.152) 0:09:08.400 ******** 2026-03-19 04:45:19.471317 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471332 | orchestrator | 2026-03-19 04:45:19.471347 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 04:45:19.471362 | orchestrator | Thursday 19 March 2026 04:45:15 +0000 (0:00:00.152) 0:09:08.552 ******** 2026-03-19 04:45:19.471376 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471391 | orchestrator | 2026-03-19 04:45:19.471405 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 04:45:19.471421 | orchestrator | Thursday 19 March 2026 04:45:15 +0000 (0:00:00.137) 0:09:08.689 ******** 2026-03-19 04:45:19.471435 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471450 | orchestrator | 2026-03-19 04:45:19.471465 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:45:19.471480 | orchestrator | Thursday 19 March 2026 04:45:15 +0000 (0:00:00.390) 0:09:09.080 ******** 2026-03-19 04:45:19.471493 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471507 | orchestrator | 2026-03-19 04:45:19.471522 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:45:19.471537 | orchestrator | Thursday 19 March 2026 04:45:15 +0000 (0:00:00.135) 0:09:09.215 ******** 2026-03-19 04:45:19.471552 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471566 | orchestrator | 2026-03-19 04:45:19.471596 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:45:19.471611 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.133) 0:09:09.348 ******** 2026-03-19 04:45:19.471625 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471639 | orchestrator | 2026-03-19 04:45:19.471653 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:45:19.471667 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.141) 0:09:09.490 ******** 2026-03-19 04:45:19.471680 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471694 | orchestrator | 2026-03-19 04:45:19.471735 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 04:45:19.471752 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.210) 0:09:09.700 ******** 2026-03-19 04:45:19.471765 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471779 | orchestrator | 2026-03-19 04:45:19.471793 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:45:19.471806 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.137) 0:09:09.837 ******** 
2026-03-19 04:45:19.471835 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471849 | orchestrator | 2026-03-19 04:45:19.471862 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:45:19.471947 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.132) 0:09:09.970 ******** 2026-03-19 04:45:19.471966 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.471982 | orchestrator | 2026-03-19 04:45:19.471996 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:45:19.472011 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.117) 0:09:10.088 ******** 2026-03-19 04:45:19.472026 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472041 | orchestrator | 2026-03-19 04:45:19.472056 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:45:19.472070 | orchestrator | Thursday 19 March 2026 04:45:16 +0000 (0:00:00.127) 0:09:10.215 ******** 2026-03-19 04:45:19.472086 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472101 | orchestrator | 2026-03-19 04:45:19.472117 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:45:19.472132 | orchestrator | Thursday 19 March 2026 04:45:17 +0000 (0:00:00.147) 0:09:10.363 ******** 2026-03-19 04:45:19.472148 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472163 | orchestrator | 2026-03-19 04:45:19.472178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:45:19.472193 | orchestrator | Thursday 19 March 2026 04:45:17 +0000 (0:00:00.154) 0:09:10.518 ******** 2026-03-19 04:45:19.472207 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472221 | orchestrator | 2026-03-19 04:45:19.472236 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-03-19 04:45:19.472249 | orchestrator | Thursday 19 March 2026 04:45:17 +0000 (0:00:00.128) 0:09:10.646 ******** 2026-03-19 04:45:19.472264 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472277 | orchestrator | 2026-03-19 04:45:19.472291 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:45:19.472307 | orchestrator | Thursday 19 March 2026 04:45:17 +0000 (0:00:00.451) 0:09:11.098 ******** 2026-03-19 04:45:19.472321 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472336 | orchestrator | 2026-03-19 04:45:19.472350 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:45:19.472364 | orchestrator | Thursday 19 March 2026 04:45:17 +0000 (0:00:00.141) 0:09:11.240 ******** 2026-03-19 04:45:19.472377 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472393 | orchestrator | 2026-03-19 04:45:19.472407 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 04:45:19.472421 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.129) 0:09:11.370 ******** 2026-03-19 04:45:19.472434 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472447 | orchestrator | 2026-03-19 04:45:19.472461 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:45:19.472473 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.140) 0:09:11.511 ******** 2026-03-19 04:45:19.472489 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472503 | orchestrator | 2026-03-19 04:45:19.472518 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:45:19.472533 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.133) 0:09:11.644 ******** 2026-03-19 04:45:19.472549 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:45:19.472564 | orchestrator | 2026-03-19 04:45:19.472578 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 04:45:19.472591 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.139) 0:09:11.784 ******** 2026-03-19 04:45:19.472606 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472621 | orchestrator | 2026-03-19 04:45:19.472636 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:45:19.472648 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.128) 0:09:11.912 ******** 2026-03-19 04:45:19.472677 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472691 | orchestrator | 2026-03-19 04:45:19.472707 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 04:45:19.472721 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.138) 0:09:12.051 ******** 2026-03-19 04:45:19.472736 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472777 | orchestrator | 2026-03-19 04:45:19.472794 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:45:19.472809 | orchestrator | Thursday 19 March 2026 04:45:18 +0000 (0:00:00.140) 0:09:12.192 ******** 2026-03-19 04:45:19.472824 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472838 | orchestrator | 2026-03-19 04:45:19.472853 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:45:19.472867 | orchestrator | Thursday 19 March 2026 04:45:19 +0000 (0:00:00.139) 0:09:12.331 ******** 2026-03-19 04:45:19.472950 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.472965 | orchestrator | 2026-03-19 04:45:19.472993 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:45:19.473008 | orchestrator | Thursday 19 March 2026 04:45:19 +0000 (0:00:00.140) 0:09:12.472 ******** 2026-03-19 04:45:19.473024 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.473037 | orchestrator | 2026-03-19 04:45:19.473052 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:45:19.473067 | orchestrator | Thursday 19 March 2026 04:45:19 +0000 (0:00:00.121) 0:09:12.593 ******** 2026-03-19 04:45:19.473080 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:19.473095 | orchestrator | 2026-03-19 04:45:19.473130 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:45:44.838276 | orchestrator | Thursday 19 March 2026 04:45:19 +0000 (0:00:00.134) 0:09:12.728 ******** 2026-03-19 04:45:44.838379 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838391 | orchestrator | 2026-03-19 04:45:44.838400 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:45:44.838408 | orchestrator | Thursday 19 March 2026 04:45:19 +0000 (0:00:00.366) 0:09:13.094 ******** 2026-03-19 04:45:44.838415 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838423 | orchestrator | 2026-03-19 04:45:44.838431 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:45:44.838438 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.254) 0:09:13.349 ******** 2026-03-19 04:45:44.838445 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838452 | orchestrator | 2026-03-19 04:45:44.838460 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:45:44.838467 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.130) 0:09:13.479 ******** 2026-03-19 
04:45:44.838475 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838482 | orchestrator | 2026-03-19 04:45:44.838489 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:45:44.838496 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.237) 0:09:13.716 ******** 2026-03-19 04:45:44.838504 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838511 | orchestrator | 2026-03-19 04:45:44.838518 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:45:44.838526 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.127) 0:09:13.843 ******** 2026-03-19 04:45:44.838533 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838540 | orchestrator | 2026-03-19 04:45:44.838549 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:45:44.838558 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.134) 0:09:13.978 ******** 2026-03-19 04:45:44.838565 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838572 | orchestrator | 2026-03-19 04:45:44.838580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:45:44.838606 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.136) 0:09:14.115 ******** 2026-03-19 04:45:44.838614 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838621 | orchestrator | 2026-03-19 04:45:44.838628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:45:44.838636 | orchestrator | Thursday 19 March 2026 04:45:20 +0000 (0:00:00.141) 0:09:14.256 ******** 2026-03-19 04:45:44.838643 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838650 | orchestrator | 2026-03-19 04:45:44.838657 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:45:44.838664 | orchestrator | Thursday 19 March 2026 04:45:21 +0000 (0:00:00.133) 0:09:14.390 ******** 2026-03-19 04:45:44.838672 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838679 | orchestrator | 2026-03-19 04:45:44.838686 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:45:44.838693 | orchestrator | Thursday 19 March 2026 04:45:21 +0000 (0:00:00.132) 0:09:14.522 ******** 2026-03-19 04:45:44.838701 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:45:44.838708 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:45:44.838716 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:45:44.838723 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838730 | orchestrator | 2026-03-19 04:45:44.838737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:45:44.838745 | orchestrator | Thursday 19 March 2026 04:45:21 +0000 (0:00:00.370) 0:09:14.893 ******** 2026-03-19 04:45:44.838752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:45:44.838759 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:45:44.838766 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:45:44.838773 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838780 | orchestrator | 2026-03-19 04:45:44.838788 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:45:44.838795 | orchestrator | Thursday 19 March 2026 04:45:22 +0000 (0:00:00.695) 0:09:15.588 ******** 2026-03-19 04:45:44.838802 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-19 04:45:44.838809 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-03-19 04:45:44.838817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-19 04:45:44.838824 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838832 | orchestrator | 2026-03-19 04:45:44.838841 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:45:44.838849 | orchestrator | Thursday 19 March 2026 04:45:23 +0000 (0:00:00.675) 0:09:16.264 ******** 2026-03-19 04:45:44.838922 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838931 | orchestrator | 2026-03-19 04:45:44.838939 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:45:44.838948 | orchestrator | Thursday 19 March 2026 04:45:23 +0000 (0:00:00.372) 0:09:16.637 ******** 2026-03-19 04:45:44.838957 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-19 04:45:44.838965 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.838973 | orchestrator | 2026-03-19 04:45:44.838993 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:45:44.839001 | orchestrator | Thursday 19 March 2026 04:45:23 +0000 (0:00:00.309) 0:09:16.946 ******** 2026-03-19 04:45:44.839009 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.839016 | orchestrator | 2026-03-19 04:45:44.839023 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:45:44.839030 | orchestrator | Thursday 19 March 2026 04:45:23 +0000 (0:00:00.202) 0:09:17.148 ******** 2026-03-19 04:45:44.839038 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:45:44.839059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:45:44.839074 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:45:44.839081 | orchestrator | skipping: 
[testbed-node-2] 2026-03-19 04:45:44.839089 | orchestrator | 2026-03-19 04:45:44.839096 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 04:45:44.839103 | orchestrator | Thursday 19 March 2026 04:45:24 +0000 (0:00:00.399) 0:09:17.547 ******** 2026-03-19 04:45:44.839110 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.839117 | orchestrator | 2026-03-19 04:45:44.839125 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-19 04:45:44.839132 | orchestrator | Thursday 19 March 2026 04:45:24 +0000 (0:00:00.137) 0:09:17.685 ******** 2026-03-19 04:45:44.839139 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.839146 | orchestrator | 2026-03-19 04:45:44.839153 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-19 04:45:44.839161 | orchestrator | Thursday 19 March 2026 04:45:24 +0000 (0:00:00.131) 0:09:17.816 ******** 2026-03-19 04:45:44.839168 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.839175 | orchestrator | 2026-03-19 04:45:44.839182 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-19 04:45:44.839189 | orchestrator | Thursday 19 March 2026 04:45:24 +0000 (0:00:00.144) 0:09:17.961 ******** 2026-03-19 04:45:44.839196 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:45:44.839204 | orchestrator | 2026-03-19 04:45:44.839211 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-19 04:45:44.839218 | orchestrator | 2026-03-19 04:45:44.839225 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-19 04:45:44.839232 | orchestrator | Thursday 19 March 2026 04:45:25 +0000 (0:00:00.593) 0:09:18.554 ******** 2026-03-19 04:45:44.839239 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:45:44.839247 | 
orchestrator | 2026-03-19 04:45:44.839254 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-19 04:45:44.839261 | orchestrator | Thursday 19 March 2026 04:45:37 +0000 (0:00:12.078) 0:09:30.632 ******** 2026-03-19 04:45:44.839268 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:45:44.839275 | orchestrator | 2026-03-19 04:45:44.839283 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:45:44.839290 | orchestrator | Thursday 19 March 2026 04:45:39 +0000 (0:00:01.780) 0:09:32.413 ******** 2026-03-19 04:45:44.839297 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-19 04:45:44.839304 | orchestrator | 2026-03-19 04:45:44.839312 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:45:44.839319 | orchestrator | Thursday 19 March 2026 04:45:39 +0000 (0:00:00.246) 0:09:32.659 ******** 2026-03-19 04:45:44.839326 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839333 | orchestrator | 2026-03-19 04:45:44.839341 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:45:44.839348 | orchestrator | Thursday 19 March 2026 04:45:39 +0000 (0:00:00.474) 0:09:33.133 ******** 2026-03-19 04:45:44.839355 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839362 | orchestrator | 2026-03-19 04:45:44.839369 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:45:44.839377 | orchestrator | Thursday 19 March 2026 04:45:40 +0000 (0:00:00.137) 0:09:33.271 ******** 2026-03-19 04:45:44.839384 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839391 | orchestrator | 2026-03-19 04:45:44.839398 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:45:44.839405 | orchestrator | 
Thursday 19 March 2026 04:45:40 +0000 (0:00:00.514) 0:09:33.785 ******** 2026-03-19 04:45:44.839412 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839420 | orchestrator | 2026-03-19 04:45:44.839427 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:45:44.839434 | orchestrator | Thursday 19 March 2026 04:45:40 +0000 (0:00:00.136) 0:09:33.922 ******** 2026-03-19 04:45:44.839446 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839453 | orchestrator | 2026-03-19 04:45:44.839461 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:45:44.839468 | orchestrator | Thursday 19 March 2026 04:45:40 +0000 (0:00:00.160) 0:09:34.083 ******** 2026-03-19 04:45:44.839475 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839482 | orchestrator | 2026-03-19 04:45:44.839489 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:45:44.839497 | orchestrator | Thursday 19 March 2026 04:45:40 +0000 (0:00:00.151) 0:09:34.234 ******** 2026-03-19 04:45:44.839505 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:44.839512 | orchestrator | 2026-03-19 04:45:44.839519 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:45:44.839526 | orchestrator | Thursday 19 March 2026 04:45:41 +0000 (0:00:00.134) 0:09:34.368 ******** 2026-03-19 04:45:44.839533 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839541 | orchestrator | 2026-03-19 04:45:44.839548 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:45:44.839555 | orchestrator | Thursday 19 March 2026 04:45:41 +0000 (0:00:00.134) 0:09:34.503 ******** 2026-03-19 04:45:44.839562 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:45:44.839570 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:45:44.839577 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:45:44.839584 | orchestrator | 2026-03-19 04:45:44.839596 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:45:44.839603 | orchestrator | Thursday 19 March 2026 04:45:42 +0000 (0:00:00.910) 0:09:35.413 ******** 2026-03-19 04:45:44.839611 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:44.839618 | orchestrator | 2026-03-19 04:45:44.839625 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:45:44.839632 | orchestrator | Thursday 19 March 2026 04:45:42 +0000 (0:00:00.242) 0:09:35.656 ******** 2026-03-19 04:45:44.839640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:45:44.839651 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:45:49.525555 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:45:49.525630 | orchestrator | 2026-03-19 04:45:49.525639 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:45:49.525648 | orchestrator | Thursday 19 March 2026 04:45:44 +0000 (0:00:02.430) 0:09:38.087 ******** 2026-03-19 04:45:49.525655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:45:49.525662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:45:49.525668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:45:49.525674 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.525680 | orchestrator | 2026-03-19 04:45:49.525686 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:45:49.525692 | 
orchestrator | Thursday 19 March 2026 04:45:45 +0000 (0:00:00.401) 0:09:38.488 ******** 2026-03-19 04:45:49.525699 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525708 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525738 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.525744 | orchestrator | 2026-03-19 04:45:49.525750 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:45:49.525756 | orchestrator | Thursday 19 March 2026 04:45:45 +0000 (0:00:00.650) 0:09:39.139 ******** 2026-03-19 04:45:49.525763 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525772 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.525784 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.525790 | orchestrator | 2026-03-19 04:45:49.525796 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:45:49.525802 | orchestrator | Thursday 19 March 2026 04:45:46 +0000 (0:00:00.191) 0:09:39.330 ******** 2026-03-19 04:45:49.525821 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:45:43.256402', 'end': '2026-03-19 04:45:43.319181', 'delta': '0:00:00.062779', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:45:49.525842 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:45:43.802850', 'end': '2026-03-19 04:45:43.856290', 'delta': '0:00:00.053440', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:45:49.525889 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:45:44.381378', 'end': '2026-03-19 04:45:44.435647', 'delta': '0:00:00.054269', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:45:49.525902 | orchestrator | 2026-03-19 04:45:49.525908 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:45:49.525914 | orchestrator | Thursday 19 March 2026 04:45:46 +0000 (0:00:00.194) 0:09:39.524 ******** 2026-03-19 04:45:49.525919 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:49.525926 | orchestrator | 2026-03-19 04:45:49.525932 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:45:49.525937 | orchestrator | Thursday 19 March 2026 04:45:46 +0000 
(0:00:00.258) 0:09:39.782 ******** 2026-03-19 04:45:49.525943 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.525967 | orchestrator | 2026-03-19 04:45:49.525973 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:45:49.525979 | orchestrator | Thursday 19 March 2026 04:45:46 +0000 (0:00:00.240) 0:09:40.023 ******** 2026-03-19 04:45:49.525985 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:49.525991 | orchestrator | 2026-03-19 04:45:49.525996 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:45:49.526002 | orchestrator | Thursday 19 March 2026 04:45:46 +0000 (0:00:00.142) 0:09:40.165 ******** 2026-03-19 04:45:49.526008 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:49.526013 | orchestrator | 2026-03-19 04:45:49.526057 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:45:49.526063 | orchestrator | Thursday 19 March 2026 04:45:47 +0000 (0:00:00.968) 0:09:41.134 ******** 2026-03-19 04:45:49.526069 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:49.526074 | orchestrator | 2026-03-19 04:45:49.526080 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:45:49.526086 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.151) 0:09:41.285 ******** 2026-03-19 04:45:49.526092 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526097 | orchestrator | 2026-03-19 04:45:49.526103 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:45:49.526109 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.130) 0:09:41.415 ******** 2026-03-19 04:45:49.526116 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526123 | orchestrator | 2026-03-19 04:45:49.526130 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-03-19 04:45:49.526136 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.215) 0:09:41.631 ******** 2026-03-19 04:45:49.526143 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526149 | orchestrator | 2026-03-19 04:45:49.526155 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:45:49.526162 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.116) 0:09:41.747 ******** 2026-03-19 04:45:49.526168 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526175 | orchestrator | 2026-03-19 04:45:49.526181 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:45:49.526188 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.127) 0:09:41.875 ******** 2026-03-19 04:45:49.526194 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526201 | orchestrator | 2026-03-19 04:45:49.526207 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:45:49.526214 | orchestrator | Thursday 19 March 2026 04:45:48 +0000 (0:00:00.366) 0:09:42.242 ******** 2026-03-19 04:45:49.526220 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526227 | orchestrator | 2026-03-19 04:45:49.526233 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:45:49.526240 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.130) 0:09:42.373 ******** 2026-03-19 04:45:49.526250 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526257 | orchestrator | 2026-03-19 04:45:49.526269 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:45:49.526275 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.139) 0:09:42.512 ******** 2026-03-19 04:45:49.526282 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 04:45:49.526289 | orchestrator | 2026-03-19 04:45:49.526295 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:45:49.526302 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.139) 0:09:42.651 ******** 2026-03-19 04:45:49.526309 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.526315 | orchestrator | 2026-03-19 04:45:49.526328 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:45:49.991565 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.130) 0:09:42.782 ******** 2026-03-19 04:45:49.991669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:45:49.991723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:45:49.991823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:45:49.991844 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:49.991938 | orchestrator | 2026-03-19 04:45:49.991950 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:45:49.991960 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.225) 0:09:43.007 ******** 2026-03-19 04:45:49.991972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.991992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:49.992015 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020414 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020531 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020593 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020605 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020614 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:45:53.020623 | orchestrator | skipping: [testbed-node-0] 2026-03-19 
04:45:53.020633 | orchestrator | 2026-03-19 04:45:53.020641 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:45:53.020651 | orchestrator | Thursday 19 March 2026 04:45:49 +0000 (0:00:00.241) 0:09:43.248 ******** 2026-03-19 04:45:53.020659 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:53.020667 | orchestrator | 2026-03-19 04:45:53.020675 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:45:53.020689 | orchestrator | Thursday 19 March 2026 04:45:50 +0000 (0:00:00.507) 0:09:43.756 ******** 2026-03-19 04:45:53.020698 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:53.020706 | orchestrator | 2026-03-19 04:45:53.020714 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:45:53.020722 | orchestrator | Thursday 19 March 2026 04:45:50 +0000 (0:00:00.136) 0:09:43.893 ******** 2026-03-19 04:45:53.020730 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:45:53.020738 | orchestrator | 2026-03-19 04:45:53.020746 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:45:53.020754 | orchestrator | Thursday 19 March 2026 04:45:51 +0000 (0:00:00.504) 0:09:44.398 ******** 2026-03-19 04:45:53.020762 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:53.020770 | orchestrator | 2026-03-19 04:45:53.020778 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:45:53.020786 | orchestrator | Thursday 19 March 2026 04:45:51 +0000 (0:00:00.135) 0:09:44.533 ******** 2026-03-19 04:45:53.020794 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:53.020802 | orchestrator | 2026-03-19 04:45:53.020810 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:45:53.020818 | orchestrator | Thursday 19 March 2026 
04:45:51 +0000 (0:00:00.230) 0:09:44.764 ******** 2026-03-19 04:45:53.020825 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:53.020833 | orchestrator | 2026-03-19 04:45:53.020841 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:45:53.020985 | orchestrator | Thursday 19 March 2026 04:45:51 +0000 (0:00:00.135) 0:09:44.899 ******** 2026-03-19 04:45:53.021007 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:45:53.021017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:45:53.021026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:45:53.021035 | orchestrator | 2026-03-19 04:45:53.021045 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:45:53.021055 | orchestrator | Thursday 19 March 2026 04:45:52 +0000 (0:00:01.207) 0:09:46.107 ******** 2026-03-19 04:45:53.021065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:45:53.021074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:45:53.021084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:45:53.021093 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:45:53.021101 | orchestrator | 2026-03-19 04:45:53.021119 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:46:02.682930 | orchestrator | Thursday 19 March 2026 04:45:53 +0000 (0:00:00.163) 0:09:46.270 ******** 2026-03-19 04:46:02.683050 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683068 | orchestrator | 2026-03-19 04:46:02.683081 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:46:02.683093 | orchestrator | Thursday 19 March 2026 04:45:53 +0000 (0:00:00.140) 0:09:46.411 ******** 2026-03-19 04:46:02.683105 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:46:02.683116 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:46:02.683128 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:46:02.683139 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:46:02.683150 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:46:02.683161 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:46:02.683172 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:46:02.683183 | orchestrator | 2026-03-19 04:46:02.683195 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:46:02.683232 | orchestrator | Thursday 19 March 2026 04:45:53 +0000 (0:00:00.775) 0:09:47.186 ******** 2026-03-19 04:46:02.683245 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:46:02.683256 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:46:02.683267 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:46:02.683278 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:46:02.683289 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:46:02.683300 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:46:02.683311 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:46:02.683322 | orchestrator | 2026-03-19 04:46:02.683334 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:46:02.683344 | orchestrator | Thursday 19 March 2026 04:45:55 +0000 (0:00:01.609) 0:09:48.795 ******** 2026-03-19 04:46:02.683355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-19 04:46:02.683367 | orchestrator | 2026-03-19 04:46:02.683378 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:46:02.683389 | orchestrator | Thursday 19 March 2026 04:45:55 +0000 (0:00:00.202) 0:09:48.997 ******** 2026-03-19 04:46:02.683400 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-03-19 04:46:02.683411 | orchestrator | 2026-03-19 04:46:02.683422 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:46:02.683434 | orchestrator | Thursday 19 March 2026 04:45:55 +0000 (0:00:00.225) 0:09:49.223 ******** 2026-03-19 04:46:02.683447 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.683461 | orchestrator | 2026-03-19 04:46:02.683473 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:46:02.683486 | orchestrator | Thursday 19 March 2026 04:45:56 +0000 (0:00:00.571) 0:09:49.795 ******** 2026-03-19 04:46:02.683500 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683512 | orchestrator | 2026-03-19 04:46:02.683525 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:46:02.683538 | orchestrator | Thursday 19 March 2026 04:45:56 +0000 (0:00:00.123) 0:09:49.918 ******** 2026-03-19 04:46:02.683551 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683564 | orchestrator | 2026-03-19 04:46:02.683576 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-03-19 04:46:02.683590 | orchestrator | Thursday 19 March 2026 04:45:56 +0000 (0:00:00.126) 0:09:50.045 ******** 2026-03-19 04:46:02.683602 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683615 | orchestrator | 2026-03-19 04:46:02.683628 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:46:02.683641 | orchestrator | Thursday 19 March 2026 04:45:57 +0000 (0:00:00.383) 0:09:50.428 ******** 2026-03-19 04:46:02.683654 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.683667 | orchestrator | 2026-03-19 04:46:02.683679 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:46:02.683692 | orchestrator | Thursday 19 March 2026 04:45:57 +0000 (0:00:00.627) 0:09:51.055 ******** 2026-03-19 04:46:02.683705 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683718 | orchestrator | 2026-03-19 04:46:02.683731 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:46:02.683758 | orchestrator | Thursday 19 March 2026 04:45:57 +0000 (0:00:00.136) 0:09:51.192 ******** 2026-03-19 04:46:02.683771 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683784 | orchestrator | 2026-03-19 04:46:02.683797 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:46:02.683808 | orchestrator | Thursday 19 March 2026 04:45:58 +0000 (0:00:00.133) 0:09:51.325 ******** 2026-03-19 04:46:02.683826 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.683860 | orchestrator | 2026-03-19 04:46:02.683874 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:46:02.683884 | orchestrator | Thursday 19 March 2026 04:45:58 +0000 (0:00:00.602) 0:09:51.928 ******** 2026-03-19 04:46:02.683895 | orchestrator | ok: [testbed-node-0] 2026-03-19 
04:46:02.683906 | orchestrator | 2026-03-19 04:46:02.683917 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:46:02.683945 | orchestrator | Thursday 19 March 2026 04:45:59 +0000 (0:00:00.573) 0:09:52.501 ******** 2026-03-19 04:46:02.683957 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.683968 | orchestrator | 2026-03-19 04:46:02.683979 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:46:02.683990 | orchestrator | Thursday 19 March 2026 04:45:59 +0000 (0:00:00.132) 0:09:52.634 ******** 2026-03-19 04:46:02.684001 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.684012 | orchestrator | 2026-03-19 04:46:02.684023 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:46:02.684034 | orchestrator | Thursday 19 March 2026 04:45:59 +0000 (0:00:00.153) 0:09:52.788 ******** 2026-03-19 04:46:02.684044 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684055 | orchestrator | 2026-03-19 04:46:02.684066 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:46:02.684077 | orchestrator | Thursday 19 March 2026 04:45:59 +0000 (0:00:00.129) 0:09:52.917 ******** 2026-03-19 04:46:02.684088 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684099 | orchestrator | 2026-03-19 04:46:02.684109 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:46:02.684120 | orchestrator | Thursday 19 March 2026 04:45:59 +0000 (0:00:00.134) 0:09:53.051 ******** 2026-03-19 04:46:02.684131 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684142 | orchestrator | 2026-03-19 04:46:02.684153 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:46:02.684164 | orchestrator | Thursday 19 March 
2026 04:45:59 +0000 (0:00:00.130) 0:09:53.182 ******** 2026-03-19 04:46:02.684175 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684185 | orchestrator | 2026-03-19 04:46:02.684196 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:46:02.684207 | orchestrator | Thursday 19 March 2026 04:46:00 +0000 (0:00:00.136) 0:09:53.318 ******** 2026-03-19 04:46:02.684218 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684229 | orchestrator | 2026-03-19 04:46:02.684240 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:46:02.684251 | orchestrator | Thursday 19 March 2026 04:46:00 +0000 (0:00:00.126) 0:09:53.445 ******** 2026-03-19 04:46:02.684261 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.684272 | orchestrator | 2026-03-19 04:46:02.684283 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:46:02.684294 | orchestrator | Thursday 19 March 2026 04:46:00 +0000 (0:00:00.436) 0:09:53.882 ******** 2026-03-19 04:46:02.684305 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.684316 | orchestrator | 2026-03-19 04:46:02.684326 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:46:02.684337 | orchestrator | Thursday 19 March 2026 04:46:00 +0000 (0:00:00.159) 0:09:54.042 ******** 2026-03-19 04:46:02.684348 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:02.684359 | orchestrator | 2026-03-19 04:46:02.684371 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:46:02.684381 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.229) 0:09:54.271 ******** 2026-03-19 04:46:02.684392 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684403 | orchestrator | 2026-03-19 04:46:02.684414 | orchestrator | TASK [ceph-common 
: Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:46:02.684425 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.134) 0:09:54.406 ******** 2026-03-19 04:46:02.684443 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684454 | orchestrator | 2026-03-19 04:46:02.684465 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:46:02.684476 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.126) 0:09:54.532 ******** 2026-03-19 04:46:02.684487 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684497 | orchestrator | 2026-03-19 04:46:02.684508 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:46:02.684519 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.115) 0:09:54.647 ******** 2026-03-19 04:46:02.684530 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684541 | orchestrator | 2026-03-19 04:46:02.684552 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:46:02.684563 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.129) 0:09:54.776 ******** 2026-03-19 04:46:02.684574 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684584 | orchestrator | 2026-03-19 04:46:02.684595 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:46:02.684606 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.122) 0:09:54.898 ******** 2026-03-19 04:46:02.684617 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684628 | orchestrator | 2026-03-19 04:46:02.684639 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 04:46:02.684650 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.130) 0:09:55.029 ******** 2026-03-19 04:46:02.684660 | 
orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684671 | orchestrator | 2026-03-19 04:46:02.684682 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 04:46:02.684693 | orchestrator | Thursday 19 March 2026 04:46:01 +0000 (0:00:00.133) 0:09:55.162 ******** 2026-03-19 04:46:02.684704 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684715 | orchestrator | 2026-03-19 04:46:02.684732 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:46:02.684743 | orchestrator | Thursday 19 March 2026 04:46:02 +0000 (0:00:00.141) 0:09:55.303 ******** 2026-03-19 04:46:02.684754 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684765 | orchestrator | 2026-03-19 04:46:02.684776 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:46:02.684787 | orchestrator | Thursday 19 March 2026 04:46:02 +0000 (0:00:00.138) 0:09:55.442 ******** 2026-03-19 04:46:02.684798 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684809 | orchestrator | 2026-03-19 04:46:02.684820 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:46:02.684831 | orchestrator | Thursday 19 March 2026 04:46:02 +0000 (0:00:00.129) 0:09:55.571 ******** 2026-03-19 04:46:02.684863 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:02.684875 | orchestrator | 2026-03-19 04:46:02.684891 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:46:19.979233 | orchestrator | Thursday 19 March 2026 04:46:02 +0000 (0:00:00.369) 0:09:55.940 ******** 2026-03-19 04:46:19.979377 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.979395 | orchestrator | 2026-03-19 04:46:19.979408 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] 
*************** 2026-03-19 04:46:19.979419 | orchestrator | Thursday 19 March 2026 04:46:02 +0000 (0:00:00.196) 0:09:56.137 ******** 2026-03-19 04:46:19.979430 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.979442 | orchestrator | 2026-03-19 04:46:19.979453 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:46:19.979464 | orchestrator | Thursday 19 March 2026 04:46:03 +0000 (0:00:01.051) 0:09:57.189 ******** 2026-03-19 04:46:19.979475 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.979486 | orchestrator | 2026-03-19 04:46:19.979497 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:46:19.979508 | orchestrator | Thursday 19 March 2026 04:46:05 +0000 (0:00:01.465) 0:09:58.654 ******** 2026-03-19 04:46:19.979542 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-03-19 04:46:19.979555 | orchestrator | 2026-03-19 04:46:19.979566 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 04:46:19.979577 | orchestrator | Thursday 19 March 2026 04:46:05 +0000 (0:00:00.204) 0:09:58.858 ******** 2026-03-19 04:46:19.979593 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.979611 | orchestrator | 2026-03-19 04:46:19.979630 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 04:46:19.979650 | orchestrator | Thursday 19 March 2026 04:46:05 +0000 (0:00:00.131) 0:09:58.990 ******** 2026-03-19 04:46:19.979667 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.979683 | orchestrator | 2026-03-19 04:46:19.979694 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-19 04:46:19.979711 | orchestrator | Thursday 19 March 2026 04:46:05 +0000 (0:00:00.132) 0:09:59.123 ******** 2026-03-19 04:46:19.979729 | 
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 04:46:19.979746 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 04:46:19.979766 | orchestrator | 2026-03-19 04:46:19.979784 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 04:46:19.979801 | orchestrator | Thursday 19 March 2026 04:46:06 +0000 (0:00:00.830) 0:09:59.953 ******** 2026-03-19 04:46:19.979821 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.979868 | orchestrator | 2026-03-19 04:46:19.979888 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 04:46:19.979903 | orchestrator | Thursday 19 March 2026 04:46:07 +0000 (0:00:00.478) 0:10:00.432 ******** 2026-03-19 04:46:19.979915 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.979928 | orchestrator | 2026-03-19 04:46:19.979941 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 04:46:19.979954 | orchestrator | Thursday 19 March 2026 04:46:07 +0000 (0:00:00.135) 0:10:00.568 ******** 2026-03-19 04:46:19.979966 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.979978 | orchestrator | 2026-03-19 04:46:19.979991 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:46:19.980003 | orchestrator | Thursday 19 March 2026 04:46:07 +0000 (0:00:00.123) 0:10:00.691 ******** 2026-03-19 04:46:19.980015 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980027 | orchestrator | 2026-03-19 04:46:19.980040 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:46:19.980052 | orchestrator | Thursday 19 March 2026 04:46:07 +0000 (0:00:00.349) 0:10:01.041 ******** 2026-03-19 04:46:19.980063 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-03-19 04:46:19.980073 | orchestrator | 2026-03-19 04:46:19.980084 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 04:46:19.980095 | orchestrator | Thursday 19 March 2026 04:46:07 +0000 (0:00:00.218) 0:10:01.260 ******** 2026-03-19 04:46:19.980106 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.980116 | orchestrator | 2026-03-19 04:46:19.980127 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 04:46:19.980138 | orchestrator | Thursday 19 March 2026 04:46:08 +0000 (0:00:00.781) 0:10:02.042 ******** 2026-03-19 04:46:19.980148 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 04:46:19.980159 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 04:46:19.980170 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 04:46:19.980181 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980191 | orchestrator | 2026-03-19 04:46:19.980202 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 04:46:19.980213 | orchestrator | Thursday 19 March 2026 04:46:08 +0000 (0:00:00.209) 0:10:02.251 ******** 2026-03-19 04:46:19.980234 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980244 | orchestrator | 2026-03-19 04:46:19.980256 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-19 04:46:19.980266 | orchestrator | Thursday 19 March 2026 04:46:09 +0000 (0:00:00.123) 0:10:02.374 ******** 2026-03-19 04:46:19.980277 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980288 | orchestrator | 2026-03-19 04:46:19.980299 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-03-19 04:46:19.980310 | orchestrator | Thursday 19 March 2026 04:46:09 +0000 (0:00:00.166) 0:10:02.541 ******** 2026-03-19 04:46:19.980321 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980332 | orchestrator | 2026-03-19 04:46:19.980343 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 04:46:19.980354 | orchestrator | Thursday 19 March 2026 04:46:09 +0000 (0:00:00.150) 0:10:02.691 ******** 2026-03-19 04:46:19.980365 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980375 | orchestrator | 2026-03-19 04:46:19.980406 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 04:46:19.980418 | orchestrator | Thursday 19 March 2026 04:46:09 +0000 (0:00:00.161) 0:10:02.853 ******** 2026-03-19 04:46:19.980428 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980439 | orchestrator | 2026-03-19 04:46:19.980450 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:46:19.980461 | orchestrator | Thursday 19 March 2026 04:46:09 +0000 (0:00:00.152) 0:10:03.006 ******** 2026-03-19 04:46:19.980472 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.980482 | orchestrator | 2026-03-19 04:46:19.980493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:46:19.980504 | orchestrator | Thursday 19 March 2026 04:46:11 +0000 (0:00:01.659) 0:10:04.665 ******** 2026-03-19 04:46:19.980515 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.980525 | orchestrator | 2026-03-19 04:46:19.980536 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 04:46:19.980547 | orchestrator | Thursday 19 March 2026 04:46:11 +0000 (0:00:00.143) 0:10:04.809 ******** 2026-03-19 04:46:19.980557 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-03-19 04:46:19.980568 | orchestrator | 2026-03-19 04:46:19.980579 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-19 04:46:19.980590 | orchestrator | Thursday 19 March 2026 04:46:11 +0000 (0:00:00.430) 0:10:05.239 ******** 2026-03-19 04:46:19.980600 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980611 | orchestrator | 2026-03-19 04:46:19.980665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 04:46:19.980677 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.161) 0:10:05.401 ******** 2026-03-19 04:46:19.980688 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980699 | orchestrator | 2026-03-19 04:46:19.980710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 04:46:19.980721 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.155) 0:10:05.556 ******** 2026-03-19 04:46:19.980732 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980743 | orchestrator | 2026-03-19 04:46:19.980754 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 04:46:19.980765 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.138) 0:10:05.695 ******** 2026-03-19 04:46:19.980776 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980787 | orchestrator | 2026-03-19 04:46:19.980798 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-19 04:46:19.980809 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.149) 0:10:05.845 ******** 2026-03-19 04:46:19.980820 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980874 | orchestrator | 2026-03-19 04:46:19.980885 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-03-19 04:46:19.980904 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.149) 0:10:05.995 ******** 2026-03-19 04:46:19.980915 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980926 | orchestrator | 2026-03-19 04:46:19.980937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 04:46:19.980948 | orchestrator | Thursday 19 March 2026 04:46:12 +0000 (0:00:00.149) 0:10:06.145 ******** 2026-03-19 04:46:19.980959 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.980970 | orchestrator | 2026-03-19 04:46:19.980980 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 04:46:19.980991 | orchestrator | Thursday 19 March 2026 04:46:13 +0000 (0:00:00.174) 0:10:06.319 ******** 2026-03-19 04:46:19.981002 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:19.981013 | orchestrator | 2026-03-19 04:46:19.981024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 04:46:19.981035 | orchestrator | Thursday 19 March 2026 04:46:13 +0000 (0:00:00.148) 0:10:06.467 ******** 2026-03-19 04:46:19.981045 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:19.981056 | orchestrator | 2026-03-19 04:46:19.981067 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:46:19.981078 | orchestrator | Thursday 19 March 2026 04:46:13 +0000 (0:00:00.228) 0:10:06.696 ******** 2026-03-19 04:46:19.981089 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-03-19 04:46:19.981100 | orchestrator | 2026-03-19 04:46:19.981111 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 04:46:19.981122 | orchestrator | Thursday 19 March 2026 04:46:13 +0000 (0:00:00.203) 0:10:06.900 ******** 2026-03-19 
04:46:19.981133 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-03-19 04:46:19.981144 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-19 04:46:19.981155 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-19 04:46:19.981166 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-19 04:46:19.981177 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-19 04:46:19.981188 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-19 04:46:19.981199 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-19 04:46:19.981218 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-19 04:46:19.981238 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 04:46:19.981258 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 04:46:19.981277 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 04:46:19.981296 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 04:46:19.981316 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 04:46:19.981336 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 04:46:19.981354 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-03-19 04:46:19.981370 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-03-19 04:46:19.981381 | orchestrator | 2026-03-19 04:46:19.981406 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:46:38.811434 | orchestrator | Thursday 19 March 2026 04:46:19 +0000 (0:00:06.319) 0:10:13.219 ******** 2026-03-19 04:46:38.811524 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811533 | orchestrator | 2026-03-19 04:46:38.811540 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-03-19 04:46:38.811547 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.139) 0:10:13.358 ******** 2026-03-19 04:46:38.811553 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811559 | orchestrator | 2026-03-19 04:46:38.811565 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:46:38.811571 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.123) 0:10:13.482 ******** 2026-03-19 04:46:38.811596 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811602 | orchestrator | 2026-03-19 04:46:38.811608 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:46:38.811614 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.141) 0:10:13.624 ******** 2026-03-19 04:46:38.811627 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811633 | orchestrator | 2026-03-19 04:46:38.811638 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 04:46:38.811644 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.143) 0:10:13.768 ******** 2026-03-19 04:46:38.811650 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811656 | orchestrator | 2026-03-19 04:46:38.811662 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:46:38.811668 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.139) 0:10:13.907 ******** 2026-03-19 04:46:38.811673 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811679 | orchestrator | 2026-03-19 04:46:38.811685 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 04:46:38.811692 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.132) 0:10:14.039 ******** 2026-03-19 
04:46:38.811698 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811703 | orchestrator | 2026-03-19 04:46:38.811709 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:46:38.811715 | orchestrator | Thursday 19 March 2026 04:46:20 +0000 (0:00:00.138) 0:10:14.177 ******** 2026-03-19 04:46:38.811721 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811726 | orchestrator | 2026-03-19 04:46:38.811732 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:46:38.811738 | orchestrator | Thursday 19 March 2026 04:46:21 +0000 (0:00:00.119) 0:10:14.297 ******** 2026-03-19 04:46:38.811744 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811750 | orchestrator | 2026-03-19 04:46:38.811756 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:46:38.811762 | orchestrator | Thursday 19 March 2026 04:46:21 +0000 (0:00:00.140) 0:10:14.437 ******** 2026-03-19 04:46:38.811767 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811773 | orchestrator | 2026-03-19 04:46:38.811779 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:46:38.811784 | orchestrator | Thursday 19 March 2026 04:46:21 +0000 (0:00:00.126) 0:10:14.564 ******** 2026-03-19 04:46:38.811790 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811796 | orchestrator | 2026-03-19 04:46:38.811801 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:46:38.811807 | orchestrator | Thursday 19 March 2026 04:46:21 +0000 (0:00:00.132) 0:10:14.697 ******** 2026-03-19 04:46:38.811854 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811861 | orchestrator | 2026-03-19 04:46:38.811866 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:46:38.811872 | orchestrator | Thursday 19 March 2026 04:46:21 +0000 (0:00:00.135) 0:10:14.832 ******** 2026-03-19 04:46:38.811878 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811884 | orchestrator | 2026-03-19 04:46:38.811889 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:46:38.811895 | orchestrator | Thursday 19 March 2026 04:46:22 +0000 (0:00:00.829) 0:10:15.661 ******** 2026-03-19 04:46:38.811901 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811907 | orchestrator | 2026-03-19 04:46:38.811913 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:46:38.811918 | orchestrator | Thursday 19 March 2026 04:46:22 +0000 (0:00:00.133) 0:10:15.795 ******** 2026-03-19 04:46:38.811924 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811930 | orchestrator | 2026-03-19 04:46:38.811935 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:46:38.811950 | orchestrator | Thursday 19 March 2026 04:46:22 +0000 (0:00:00.232) 0:10:16.027 ******** 2026-03-19 04:46:38.811955 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811961 | orchestrator | 2026-03-19 04:46:38.811967 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:46:38.811973 | orchestrator | Thursday 19 March 2026 04:46:22 +0000 (0:00:00.144) 0:10:16.171 ******** 2026-03-19 04:46:38.811990 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.811997 | orchestrator | 2026-03-19 04:46:38.812004 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:46:38.812012 | orchestrator | Thursday 19 March 
2026 04:46:23 +0000 (0:00:00.142) 0:10:16.314 ******** 2026-03-19 04:46:38.812018 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812025 | orchestrator | 2026-03-19 04:46:38.812031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:46:38.812038 | orchestrator | Thursday 19 March 2026 04:46:23 +0000 (0:00:00.151) 0:10:16.465 ******** 2026-03-19 04:46:38.812054 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812061 | orchestrator | 2026-03-19 04:46:38.812074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:46:38.812081 | orchestrator | Thursday 19 March 2026 04:46:23 +0000 (0:00:00.139) 0:10:16.605 ******** 2026-03-19 04:46:38.812088 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812094 | orchestrator | 2026-03-19 04:46:38.812112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:46:38.812119 | orchestrator | Thursday 19 March 2026 04:46:23 +0000 (0:00:00.126) 0:10:16.731 ******** 2026-03-19 04:46:38.812126 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812132 | orchestrator | 2026-03-19 04:46:38.812139 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:46:38.812146 | orchestrator | Thursday 19 March 2026 04:46:23 +0000 (0:00:00.141) 0:10:16.873 ******** 2026-03-19 04:46:38.812152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-19 04:46:38.812159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-19 04:46:38.812166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-19 04:46:38.812173 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812179 | orchestrator | 2026-03-19 04:46:38.812185 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface 
- ipv4] ****** 2026-03-19 04:46:38.812192 | orchestrator | Thursday 19 March 2026 04:46:23 +0000 (0:00:00.379) 0:10:17.252 ******** 2026-03-19 04:46:38.812199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-19 04:46:38.812205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-19 04:46:38.812212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-19 04:46:38.812218 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812224 | orchestrator | 2026-03-19 04:46:38.812231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:46:38.812237 | orchestrator | Thursday 19 March 2026 04:46:24 +0000 (0:00:00.376) 0:10:17.628 ******** 2026-03-19 04:46:38.812244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-19 04:46:38.812250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-19 04:46:38.812257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-19 04:46:38.812264 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812270 | orchestrator | 2026-03-19 04:46:38.812277 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:46:38.812283 | orchestrator | Thursday 19 March 2026 04:46:24 +0000 (0:00:00.384) 0:10:18.013 ******** 2026-03-19 04:46:38.812290 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812297 | orchestrator | 2026-03-19 04:46:38.812303 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:46:38.812315 | orchestrator | Thursday 19 March 2026 04:46:24 +0000 (0:00:00.130) 0:10:18.143 ******** 2026-03-19 04:46:38.812323 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-19 04:46:38.812329 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:46:38.812336 | orchestrator | 2026-03-19 
04:46:38.812343 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:46:38.812350 | orchestrator | Thursday 19 March 2026 04:46:25 +0000 (0:00:00.562) 0:10:18.706 ******** 2026-03-19 04:46:38.812356 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:38.812362 | orchestrator | 2026-03-19 04:46:38.812368 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:46:38.812374 | orchestrator | Thursday 19 March 2026 04:46:26 +0000 (0:00:00.835) 0:10:19.542 ******** 2026-03-19 04:46:38.812380 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:46:38.812386 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:46:38.812393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:46:38.812398 | orchestrator | 2026-03-19 04:46:38.812404 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 04:46:38.812410 | orchestrator | Thursday 19 March 2026 04:46:26 +0000 (0:00:00.637) 0:10:20.180 ******** 2026-03-19 04:46:38.812416 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-03-19 04:46:38.812422 | orchestrator | 2026-03-19 04:46:38.812428 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-19 04:46:38.812433 | orchestrator | Thursday 19 March 2026 04:46:27 +0000 (0:00:00.569) 0:10:20.750 ******** 2026-03-19 04:46:38.812439 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:38.812445 | orchestrator | 2026-03-19 04:46:38.812451 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-19 04:46:38.812457 | orchestrator | Thursday 19 March 2026 04:46:28 +0000 (0:00:00.546) 0:10:21.296 ******** 2026-03-19 04:46:38.812463 | orchestrator | 
skipping: [testbed-node-0] 2026-03-19 04:46:38.812468 | orchestrator | 2026-03-19 04:46:38.812474 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-19 04:46:38.812480 | orchestrator | Thursday 19 March 2026 04:46:28 +0000 (0:00:00.143) 0:10:21.440 ******** 2026-03-19 04:46:38.812486 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 04:46:38.812492 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 04:46:38.812498 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 04:46:38.812507 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-19 04:46:38.812513 | orchestrator | 2026-03-19 04:46:38.812519 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-19 04:46:38.812525 | orchestrator | Thursday 19 March 2026 04:46:35 +0000 (0:00:06.970) 0:10:28.411 ******** 2026-03-19 04:46:38.812531 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:46:38.812537 | orchestrator | 2026-03-19 04:46:38.812543 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-19 04:46:38.812548 | orchestrator | Thursday 19 March 2026 04:46:35 +0000 (0:00:00.199) 0:10:28.610 ******** 2026-03-19 04:46:38.812554 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-19 04:46:38.812560 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 04:46:38.812566 | orchestrator | 2026-03-19 04:46:38.812572 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-19 04:46:38.812578 | orchestrator | Thursday 19 March 2026 04:46:37 +0000 (0:00:02.363) 0:10:30.974 ******** 2026-03-19 04:46:38.812587 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-19 04:47:08.549681 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-19 04:47:08.549871 | orchestrator | 2026-03-19 
04:47:08.549902 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-19 04:47:08.549923 | orchestrator | Thursday 19 March 2026 04:46:38 +0000 (0:00:01.090) 0:10:32.065 ********
2026-03-19 04:47:08.549975 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:47:08.549998 | orchestrator |
2026-03-19 04:47:08.550090 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-19 04:47:08.550113 | orchestrator | Thursday 19 March 2026 04:46:39 +0000 (0:00:00.559) 0:10:32.625 ********
2026-03-19 04:47:08.550131 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:47:08.550148 | orchestrator |
2026-03-19 04:47:08.550159 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-19 04:47:08.550170 | orchestrator | Thursday 19 March 2026 04:46:39 +0000 (0:00:00.393) 0:10:33.018 ********
2026-03-19 04:47:08.550181 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:47:08.550192 | orchestrator |
2026-03-19 04:47:08.550202 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-19 04:47:08.550214 | orchestrator | Thursday 19 March 2026 04:46:39 +0000 (0:00:00.125) 0:10:33.144 ********
2026-03-19 04:47:08.550226 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-03-19 04:47:08.550238 | orchestrator |
2026-03-19 04:47:08.550248 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-19 04:47:08.550259 | orchestrator | Thursday 19 March 2026 04:46:40 +0000 (0:00:00.557) 0:10:33.701 ********
2026-03-19 04:47:08.550270 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:47:08.550281 | orchestrator |
2026-03-19 04:47:08.550292 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-19 04:47:08.550303 | orchestrator | Thursday 19 March 2026 04:46:40 +0000 (0:00:00.193) 0:10:33.895 ********
2026-03-19 04:47:08.550318 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:47:08.550330 | orchestrator |
2026-03-19 04:47:08.550341 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-19 04:47:08.550352 | orchestrator | Thursday 19 March 2026 04:46:40 +0000 (0:00:00.141) 0:10:34.036 ********
2026-03-19 04:47:08.550363 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-03-19 04:47:08.550374 | orchestrator |
2026-03-19 04:47:08.550385 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-19 04:47:08.550396 | orchestrator | Thursday 19 March 2026 04:46:41 +0000 (0:00:00.560) 0:10:34.597 ********
2026-03-19 04:47:08.550407 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:47:08.550417 | orchestrator |
2026-03-19 04:47:08.550428 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-19 04:47:08.550439 | orchestrator | Thursday 19 March 2026 04:46:42 +0000 (0:00:01.085) 0:10:35.683 ********
2026-03-19 04:47:08.550450 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:47:08.550461 | orchestrator |
2026-03-19 04:47:08.550472 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-19 04:47:08.550483 | orchestrator | Thursday 19 March 2026 04:46:43 +0000 (0:00:01.020) 0:10:36.704 ********
2026-03-19 04:47:08.550493 | orchestrator | ok: [testbed-node-0]
2026-03-19 04:47:08.550504 | orchestrator |
2026-03-19 04:47:08.550515 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-19 04:47:08.550526 | orchestrator | Thursday 19 March 2026 04:46:44 +0000 (0:00:01.445) 0:10:38.149 ********
2026-03-19 04:47:08.550537 | orchestrator | changed: [testbed-node-0]
2026-03-19 04:47:08.550548 | orchestrator |
2026-03-19 04:47:08.550559 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-19 04:47:08.550569 | orchestrator | Thursday 19 March 2026 04:46:47 +0000 (0:00:02.936) 0:10:41.085 ********
2026-03-19 04:47:08.550580 | orchestrator | skipping: [testbed-node-0]
2026-03-19 04:47:08.550591 | orchestrator |
2026-03-19 04:47:08.550602 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-03-19 04:47:08.550613 | orchestrator |
2026-03-19 04:47:08.550624 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-19 04:47:08.550635 | orchestrator | Thursday 19 March 2026 04:46:48 +0000 (0:00:00.858) 0:10:41.943 ********
2026-03-19 04:47:08.550646 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:47:08.550667 | orchestrator |
2026-03-19 04:47:08.550678 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-03-19 04:47:08.550689 | orchestrator | Thursday 19 March 2026 04:47:00 +0000 (0:00:12.005) 0:10:53.949 ********
2026-03-19 04:47:08.550700 | orchestrator | changed: [testbed-node-1]
2026-03-19 04:47:08.550711 | orchestrator |
2026-03-19 04:47:08.550721 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 04:47:08.550732 | orchestrator | Thursday 19 March 2026 04:47:02 +0000 (0:00:01.560) 0:10:55.509 ********
2026-03-19 04:47:08.550743 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-03-19 04:47:08.550755 | orchestrator |
2026-03-19 04:47:08.550773 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 04:47:08.550835 | orchestrator | Thursday 19 March 2026 04:47:02 +0000 (0:00:00.499) 0:10:55.733 ********
2026-03-19 04:47:08.550848 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.550866 | orchestrator |
2026-03-19 04:47:08.550880 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-19 04:47:08.550891 | orchestrator | Thursday 19 March 2026 04:47:02 +0000 (0:00:00.137) 0:10:56.233 ********
2026-03-19 04:47:08.550902 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.550913 | orchestrator |
2026-03-19 04:47:08.550923 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 04:47:08.550934 | orchestrator | Thursday 19 March 2026 04:47:03 +0000 (0:00:00.137) 0:10:56.370 ********
2026-03-19 04:47:08.550945 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.550956 | orchestrator |
2026-03-19 04:47:08.550967 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 04:47:08.550978 | orchestrator | Thursday 19 March 2026 04:47:03 +0000 (0:00:00.476) 0:10:56.847 ********
2026-03-19 04:47:08.550989 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.551000 | orchestrator |
2026-03-19 04:47:08.551032 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-19 04:47:08.551044 | orchestrator | Thursday 19 March 2026 04:47:03 +0000 (0:00:00.138) 0:10:56.985 ********
2026-03-19 04:47:08.551055 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.551066 | orchestrator |
2026-03-19 04:47:08.551077 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-19 04:47:08.551087 | orchestrator | Thursday 19 March 2026 04:47:03 +0000 (0:00:00.136) 0:10:57.121 ********
2026-03-19 04:47:08.551098 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.551109 | orchestrator |
2026-03-19 04:47:08.551120 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-19 04:47:08.551130 | orchestrator | Thursday 19 March 2026 04:47:04 +0000 (0:00:00.151) 0:10:57.273 ********
2026-03-19 04:47:08.551141 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:08.551152 | orchestrator |
2026-03-19 04:47:08.551163 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-19 04:47:08.551173 | orchestrator | Thursday 19 March 2026 04:47:04 +0000 (0:00:00.143) 0:10:57.417 ********
2026-03-19 04:47:08.551184 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.551195 | orchestrator |
2026-03-19 04:47:08.551206 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-19 04:47:08.551222 | orchestrator | Thursday 19 March 2026 04:47:04 +0000 (0:00:00.356) 0:10:57.773 ********
2026-03-19 04:47:08.551240 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:47:08.551277 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-19 04:47:08.551296 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:47:08.551311 | orchestrator |
2026-03-19 04:47:08.551327 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-19 04:47:08.551343 | orchestrator | Thursday 19 March 2026 04:47:05 +0000 (0:00:00.651) 0:10:58.424 ********
2026-03-19 04:47:08.551360 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:08.551389 | orchestrator |
2026-03-19 04:47:08.551406 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-19 04:47:08.551423 | orchestrator | Thursday 19 March 2026 04:47:05 +0000 (0:00:00.254) 0:10:58.678 ********
2026-03-19 04:47:08.551440 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:47:08.551458 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-19 04:47:08.551473 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:47:08.551489 | orchestrator |
2026-03-19 04:47:08.551506 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-19 04:47:08.551523 | orchestrator | Thursday 19 March 2026 04:47:07 +0000 (0:00:01.932) 0:11:00.611 ********
2026-03-19 04:47:08.551540 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-19 04:47:08.551558 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-19 04:47:08.551576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-19 04:47:08.551594 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:08.551612 | orchestrator |
2026-03-19 04:47:08.551630 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-19 04:47:08.551648 | orchestrator | Thursday 19 March 2026 04:47:07 +0000 (0:00:00.405) 0:11:01.017 ********
2026-03-19 04:47:08.551668 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:47:08.551689 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:47:08.551709 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:47:08.551726 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:08.551745 | orchestrator |
2026-03-19 04:47:08.551761 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-19 04:47:08.551779 | orchestrator | Thursday 19 March 2026 04:47:08 +0000 (0:00:00.609) 0:11:01.627 ********
2026-03-19 04:47:08.551839 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:08.551863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:08.551903 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.325135 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325217 | orchestrator |
2026-03-19 04:47:13.325226 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-19 04:47:13.325255 | orchestrator | Thursday 19 March 2026 04:47:08 +0000 (0:00:00.172) 0:11:01.800 ********
2026-03-19 04:47:13.325264 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:47:05.965948', 'end': '2026-03-19 04:47:06.022812', 'delta': '0:00:00.056864', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.325273 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:47:06.588076', 'end': '2026-03-19 04:47:06.633677', 'delta': '0:00:00.045601', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.325280 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:47:07.130894', 'end': '2026-03-19 04:47:07.179296', 'delta': '0:00:00.048402', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.325286 | orchestrator |
2026-03-19 04:47:13.325292 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-19 04:47:13.325298 | orchestrator | Thursday 19 March 2026 04:47:08 +0000 (0:00:00.187) 0:11:01.987 ********
2026-03-19 04:47:13.325304 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:13.325310 | orchestrator |
2026-03-19 04:47:13.325316 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-19 04:47:13.325322 | orchestrator | Thursday 19 March 2026 04:47:08 +0000 (0:00:00.254) 0:11:02.242 ********
2026-03-19 04:47:13.325328 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325333 | orchestrator |
2026-03-19 04:47:13.325339 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-19 04:47:13.325355 | orchestrator | Thursday 19 March 2026 04:47:09 +0000 (0:00:00.259) 0:11:02.502 ********
2026-03-19 04:47:13.325361 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:13.325367 | orchestrator |
2026-03-19 04:47:13.325373 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-19 04:47:13.325379 | orchestrator | Thursday 19 March 2026 04:47:09 +0000 (0:00:00.139) 0:11:02.642 ********
2026-03-19 04:47:13.325385 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:47:13.325391 | orchestrator |
2026-03-19 04:47:13.325397 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:47:13.325402 | orchestrator | Thursday 19 March 2026 04:47:11 +0000 (0:00:01.983) 0:11:04.625 ********
2026-03-19 04:47:13.325408 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:13.325418 | orchestrator |
2026-03-19 04:47:13.325424 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-19 04:47:13.325430 | orchestrator | Thursday 19 March 2026 04:47:11 +0000 (0:00:00.147) 0:11:04.772 ********
2026-03-19 04:47:13.325436 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325441 | orchestrator |
2026-03-19 04:47:13.325447 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-19 04:47:13.325465 | orchestrator | Thursday 19 March 2026 04:47:11 +0000 (0:00:00.400) 0:11:05.172 ********
2026-03-19 04:47:13.325471 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325477 | orchestrator |
2026-03-19 04:47:13.325483 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:47:13.325488 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.231) 0:11:05.404 ********
2026-03-19 04:47:13.325502 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325508 | orchestrator |
2026-03-19 04:47:13.325525 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-19 04:47:13.325531 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.132) 0:11:05.536 ********
2026-03-19 04:47:13.325537 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325543 | orchestrator |
2026-03-19 04:47:13.325548 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-19 04:47:13.325554 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.130) 0:11:05.667 ********
2026-03-19 04:47:13.325560 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325565 | orchestrator |
2026-03-19 04:47:13.325571 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-19 04:47:13.325577 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.140) 0:11:05.807 ********
2026-03-19 04:47:13.325582 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325588 | orchestrator |
2026-03-19 04:47:13.325594 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-19 04:47:13.325600 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.142) 0:11:05.949 ********
2026-03-19 04:47:13.325606 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325611 | orchestrator |
2026-03-19 04:47:13.325617 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-19 04:47:13.325623 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.135) 0:11:06.085 ********
2026-03-19 04:47:13.325629 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325634 | orchestrator |
2026-03-19 04:47:13.325640 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-19 04:47:13.325646 | orchestrator | Thursday 19 March 2026 04:47:12 +0000 (0:00:00.130) 0:11:06.215 ********
2026-03-19 04:47:13.325652 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.325658 | orchestrator |
2026-03-19 04:47:13.325663 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-19 04:47:13.325669 | orchestrator | Thursday 19 March 2026 04:47:13 +0000 (0:00:00.138) 0:11:06.353 ********
2026-03-19 04:47:13.325675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.325683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.325689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.325705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:47:13.325713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.325721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.325733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.569348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:47:13.569497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.569527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:47:13.569540 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:13.569554 | orchestrator |
2026-03-19 04:47:13.569566 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-19 04:47:13.569578 | orchestrator | Thursday 19 March 2026 04:47:13 +0000 (0:00:00.223) 0:11:06.577 ********
2026-03-19 04:47:13.569591 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569623 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569636 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569648 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569669 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569698 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:13.569721 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c07a66a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1', 'scsi-SQEMU_QEMU_HARDDISK_c07a66a6-3757-4ce2-8e0d-a91c5f9d99c1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:23.599394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:23.599504 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-19 04:47:23.599514 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:23.599522 | orchestrator |
2026-03-19 04:47:23.599529 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-19 04:47:23.599536 | orchestrator | Thursday 19 March 2026 04:47:13 +0000 (0:00:00.244) 0:11:06.821 ********
2026-03-19 04:47:23.599542 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:23.599549 | orchestrator |
2026-03-19 04:47:23.599555 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-19 04:47:23.599562 | orchestrator | Thursday 19 March 2026 04:47:14 +0000 (0:00:00.501) 0:11:07.323 ********
2026-03-19 04:47:23.599568 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:23.599574 | orchestrator |
2026-03-19 04:47:23.599580 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-19 04:47:23.599586 | orchestrator | Thursday 19 March 2026 04:47:14 +0000 (0:00:00.130) 0:11:07.453 ********
2026-03-19 04:47:23.599593 | orchestrator | ok: [testbed-node-1]
2026-03-19 04:47:23.599599 | orchestrator |
2026-03-19 04:47:23.599605 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-19 04:47:23.599611 | orchestrator | Thursday 19 March 2026 04:47:14 +0000 (0:00:00.724) 0:11:08.177 ********
2026-03-19 04:47:23.599617 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:23.599623 | orchestrator |
2026-03-19 04:47:23.599640 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-19 04:47:23.599652 | orchestrator | Thursday 19 March 2026 04:47:15 +0000 (0:00:00.149) 0:11:08.327 ********
2026-03-19 04:47:23.599658 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:23.599664 | orchestrator |
2026-03-19 04:47:23.599670 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-19 04:47:23.599675 | orchestrator | Thursday 19 March 2026 04:47:15 +0000 (0:00:00.226) 0:11:08.554 ********
2026-03-19 04:47:23.599681 | orchestrator | skipping: [testbed-node-1]
2026-03-19 04:47:23.599687 | orchestrator |
2026-03-19 04:47:23.599693 |
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:47:23.599699 | orchestrator | Thursday 19 March 2026 04:47:15 +0000 (0:00:00.165) 0:11:08.720 ******** 2026-03-19 04:47:23.599706 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-19 04:47:23.599713 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:47:23.599740 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-19 04:47:23.599746 | orchestrator | 2026-03-19 04:47:23.599752 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:47:23.599758 | orchestrator | Thursday 19 March 2026 04:47:16 +0000 (0:00:00.688) 0:11:09.409 ******** 2026-03-19 04:47:23.599764 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-19 04:47:23.599771 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-19 04:47:23.599777 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-19 04:47:23.599831 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.599837 | orchestrator | 2026-03-19 04:47:23.599844 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:47:23.599849 | orchestrator | Thursday 19 March 2026 04:47:16 +0000 (0:00:00.165) 0:11:09.574 ******** 2026-03-19 04:47:23.599856 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.599862 | orchestrator | 2026-03-19 04:47:23.599867 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:47:23.599873 | orchestrator | Thursday 19 March 2026 04:47:16 +0000 (0:00:00.138) 0:11:09.713 ******** 2026-03-19 04:47:23.599879 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:47:23.599887 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 
04:47:23.599893 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:47:23.599899 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:47:23.599905 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:47:23.599911 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:47:23.599931 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:47:23.599937 | orchestrator | 2026-03-19 04:47:23.599944 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:47:23.599950 | orchestrator | Thursday 19 March 2026 04:47:17 +0000 (0:00:01.101) 0:11:10.815 ******** 2026-03-19 04:47:23.599956 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:47:23.599962 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:47:23.599968 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:47:23.599974 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:47:23.599981 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:47:23.599987 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:47:23.599999 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:47:23.600028 | orchestrator | 2026-03-19 04:47:23.600035 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:47:23.600041 | orchestrator | Thursday 19 March 2026 04:47:19 +0000 (0:00:01.577) 0:11:12.393 
******** 2026-03-19 04:47:23.600048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-19 04:47:23.600056 | orchestrator | 2026-03-19 04:47:23.600062 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:47:23.600068 | orchestrator | Thursday 19 March 2026 04:47:19 +0000 (0:00:00.196) 0:11:12.590 ******** 2026-03-19 04:47:23.600075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-19 04:47:23.600081 | orchestrator | 2026-03-19 04:47:23.600088 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:47:23.600094 | orchestrator | Thursday 19 March 2026 04:47:19 +0000 (0:00:00.460) 0:11:13.050 ******** 2026-03-19 04:47:23.600108 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:23.600114 | orchestrator | 2026-03-19 04:47:23.600120 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:47:23.600126 | orchestrator | Thursday 19 March 2026 04:47:20 +0000 (0:00:00.545) 0:11:13.596 ******** 2026-03-19 04:47:23.600132 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600139 | orchestrator | 2026-03-19 04:47:23.600144 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:47:23.600151 | orchestrator | Thursday 19 March 2026 04:47:20 +0000 (0:00:00.150) 0:11:13.746 ******** 2026-03-19 04:47:23.600156 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600163 | orchestrator | 2026-03-19 04:47:23.600169 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:47:23.600175 | orchestrator | Thursday 19 March 2026 04:47:20 +0000 (0:00:00.134) 0:11:13.881 ******** 2026-03-19 04:47:23.600181 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
04:47:23.600187 | orchestrator | 2026-03-19 04:47:23.600193 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:47:23.600199 | orchestrator | Thursday 19 March 2026 04:47:20 +0000 (0:00:00.125) 0:11:14.006 ******** 2026-03-19 04:47:23.600205 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:23.600211 | orchestrator | 2026-03-19 04:47:23.600217 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:47:23.600223 | orchestrator | Thursday 19 March 2026 04:47:21 +0000 (0:00:00.566) 0:11:14.573 ******** 2026-03-19 04:47:23.600229 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600235 | orchestrator | 2026-03-19 04:47:23.600241 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:47:23.600247 | orchestrator | Thursday 19 March 2026 04:47:21 +0000 (0:00:00.131) 0:11:14.704 ******** 2026-03-19 04:47:23.600252 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600258 | orchestrator | 2026-03-19 04:47:23.600264 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:47:23.600269 | orchestrator | Thursday 19 March 2026 04:47:21 +0000 (0:00:00.118) 0:11:14.822 ******** 2026-03-19 04:47:23.600275 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:23.600281 | orchestrator | 2026-03-19 04:47:23.600287 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:47:23.600293 | orchestrator | Thursday 19 March 2026 04:47:22 +0000 (0:00:00.563) 0:11:15.387 ******** 2026-03-19 04:47:23.600299 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:23.600304 | orchestrator | 2026-03-19 04:47:23.600310 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:47:23.600316 | orchestrator | Thursday 19 March 2026 
04:47:22 +0000 (0:00:00.571) 0:11:15.958 ******** 2026-03-19 04:47:23.600322 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600328 | orchestrator | 2026-03-19 04:47:23.600334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:47:23.600340 | orchestrator | Thursday 19 March 2026 04:47:22 +0000 (0:00:00.118) 0:11:16.076 ******** 2026-03-19 04:47:23.600345 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:23.600352 | orchestrator | 2026-03-19 04:47:23.600358 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:47:23.600363 | orchestrator | Thursday 19 March 2026 04:47:22 +0000 (0:00:00.144) 0:11:16.221 ******** 2026-03-19 04:47:23.600369 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600375 | orchestrator | 2026-03-19 04:47:23.600381 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:47:23.600386 | orchestrator | Thursday 19 March 2026 04:47:23 +0000 (0:00:00.137) 0:11:16.359 ******** 2026-03-19 04:47:23.600393 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:23.600398 | orchestrator | 2026-03-19 04:47:23.600405 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:47:23.600411 | orchestrator | Thursday 19 March 2026 04:47:23 +0000 (0:00:00.116) 0:11:16.476 ******** 2026-03-19 04:47:23.600430 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232320 | orchestrator | 2026-03-19 04:47:35.232401 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:47:35.232408 | orchestrator | Thursday 19 March 2026 04:47:23 +0000 (0:00:00.381) 0:11:16.857 ******** 2026-03-19 04:47:35.232413 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232418 | orchestrator | 2026-03-19 04:47:35.232423 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:47:35.232427 | orchestrator | Thursday 19 March 2026 04:47:23 +0000 (0:00:00.133) 0:11:16.991 ******** 2026-03-19 04:47:35.232431 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232435 | orchestrator | 2026-03-19 04:47:35.232439 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:47:35.232444 | orchestrator | Thursday 19 March 2026 04:47:23 +0000 (0:00:00.136) 0:11:17.127 ******** 2026-03-19 04:47:35.232450 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.232457 | orchestrator | 2026-03-19 04:47:35.232463 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:47:35.232484 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.146) 0:11:17.274 ******** 2026-03-19 04:47:35.232491 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.232497 | orchestrator | 2026-03-19 04:47:35.232503 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:47:35.232509 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.156) 0:11:17.430 ******** 2026-03-19 04:47:35.232516 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.232522 | orchestrator | 2026-03-19 04:47:35.232528 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:47:35.232535 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.262) 0:11:17.693 ******** 2026-03-19 04:47:35.232541 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232547 | orchestrator | 2026-03-19 04:47:35.232553 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:47:35.232560 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.129) 0:11:17.822 ******** 2026-03-19 04:47:35.232566 | 
orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232573 | orchestrator | 2026-03-19 04:47:35.232580 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:47:35.232584 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.130) 0:11:17.952 ******** 2026-03-19 04:47:35.232588 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232591 | orchestrator | 2026-03-19 04:47:35.232595 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:47:35.232599 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.139) 0:11:18.092 ******** 2026-03-19 04:47:35.232603 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232607 | orchestrator | 2026-03-19 04:47:35.232610 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:47:35.232614 | orchestrator | Thursday 19 March 2026 04:47:24 +0000 (0:00:00.131) 0:11:18.223 ******** 2026-03-19 04:47:35.232618 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232622 | orchestrator | 2026-03-19 04:47:35.232625 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:47:35.232629 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.123) 0:11:18.346 ******** 2026-03-19 04:47:35.232633 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232637 | orchestrator | 2026-03-19 04:47:35.232640 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 04:47:35.232644 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.133) 0:11:18.479 ******** 2026-03-19 04:47:35.232648 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232652 | orchestrator | 2026-03-19 04:47:35.232656 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-03-19 04:47:35.232661 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.388) 0:11:18.867 ******** 2026-03-19 04:47:35.232678 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232682 | orchestrator | 2026-03-19 04:47:35.232686 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:47:35.232690 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.132) 0:11:19.000 ******** 2026-03-19 04:47:35.232693 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232698 | orchestrator | 2026-03-19 04:47:35.232701 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:47:35.232705 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.135) 0:11:19.135 ******** 2026-03-19 04:47:35.232709 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232712 | orchestrator | 2026-03-19 04:47:35.232716 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:47:35.232720 | orchestrator | Thursday 19 March 2026 04:47:25 +0000 (0:00:00.119) 0:11:19.255 ******** 2026-03-19 04:47:35.232724 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232729 | orchestrator | 2026-03-19 04:47:35.232735 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:47:35.232745 | orchestrator | Thursday 19 March 2026 04:47:26 +0000 (0:00:00.134) 0:11:19.390 ******** 2026-03-19 04:47:35.232752 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232757 | orchestrator | 2026-03-19 04:47:35.232763 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 04:47:35.232769 | orchestrator | Thursday 19 March 2026 04:47:26 +0000 (0:00:00.203) 0:11:19.593 ******** 2026-03-19 04:47:35.232817 | orchestrator | ok: [testbed-node-1] 
2026-03-19 04:47:35.232825 | orchestrator | 2026-03-19 04:47:35.232831 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:47:35.232838 | orchestrator | Thursday 19 March 2026 04:47:27 +0000 (0:00:00.946) 0:11:20.540 ******** 2026-03-19 04:47:35.232844 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.232850 | orchestrator | 2026-03-19 04:47:35.232856 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:47:35.232863 | orchestrator | Thursday 19 March 2026 04:47:28 +0000 (0:00:01.395) 0:11:21.936 ******** 2026-03-19 04:47:35.232872 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-19 04:47:35.232881 | orchestrator | 2026-03-19 04:47:35.232903 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 04:47:35.232911 | orchestrator | Thursday 19 March 2026 04:47:28 +0000 (0:00:00.228) 0:11:22.164 ******** 2026-03-19 04:47:35.232917 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232923 | orchestrator | 2026-03-19 04:47:35.232930 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 04:47:35.232937 | orchestrator | Thursday 19 March 2026 04:47:29 +0000 (0:00:00.139) 0:11:22.304 ******** 2026-03-19 04:47:35.232944 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.232950 | orchestrator | 2026-03-19 04:47:35.232957 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-19 04:47:35.232964 | orchestrator | Thursday 19 March 2026 04:47:29 +0000 (0:00:00.127) 0:11:22.431 ******** 2026-03-19 04:47:35.232971 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 04:47:35.232977 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 04:47:35.232985 | orchestrator | 2026-03-19 04:47:35.232995 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 04:47:35.233000 | orchestrator | Thursday 19 March 2026 04:47:30 +0000 (0:00:01.098) 0:11:23.530 ******** 2026-03-19 04:47:35.233004 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.233008 | orchestrator | 2026-03-19 04:47:35.233013 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 04:47:35.233017 | orchestrator | Thursday 19 March 2026 04:47:30 +0000 (0:00:00.482) 0:11:24.012 ******** 2026-03-19 04:47:35.233027 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233031 | orchestrator | 2026-03-19 04:47:35.233036 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 04:47:35.233040 | orchestrator | Thursday 19 March 2026 04:47:30 +0000 (0:00:00.157) 0:11:24.169 ******** 2026-03-19 04:47:35.233044 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233049 | orchestrator | 2026-03-19 04:47:35.233053 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:47:35.233058 | orchestrator | Thursday 19 March 2026 04:47:31 +0000 (0:00:00.141) 0:11:24.311 ******** 2026-03-19 04:47:35.233062 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233066 | orchestrator | 2026-03-19 04:47:35.233071 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:47:35.233082 | orchestrator | Thursday 19 March 2026 04:47:31 +0000 (0:00:00.139) 0:11:24.450 ******** 2026-03-19 04:47:35.233087 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-19 04:47:35.233091 | orchestrator | 2026-03-19 04:47:35.233096 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 04:47:35.233100 | orchestrator | Thursday 19 March 2026 04:47:31 +0000 (0:00:00.202) 0:11:24.652 ******** 2026-03-19 04:47:35.233105 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.233109 | orchestrator | 2026-03-19 04:47:35.233114 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 04:47:35.233118 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.735) 0:11:25.388 ******** 2026-03-19 04:47:35.233123 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 04:47:35.233127 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 04:47:35.233131 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 04:47:35.233135 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233139 | orchestrator | 2026-03-19 04:47:35.233143 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 04:47:35.233147 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.146) 0:11:25.534 ******** 2026-03-19 04:47:35.233150 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233154 | orchestrator | 2026-03-19 04:47:35.233158 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-19 04:47:35.233162 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.125) 0:11:25.659 ******** 2026-03-19 04:47:35.233166 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233169 | orchestrator | 2026-03-19 04:47:35.233173 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-19 04:47:35.233177 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.171) 0:11:25.830 ******** 2026-03-19 04:47:35.233181 
| orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233185 | orchestrator | 2026-03-19 04:47:35.233188 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 04:47:35.233192 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.150) 0:11:25.981 ******** 2026-03-19 04:47:35.233196 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233200 | orchestrator | 2026-03-19 04:47:35.233203 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 04:47:35.233207 | orchestrator | Thursday 19 March 2026 04:47:32 +0000 (0:00:00.170) 0:11:26.151 ******** 2026-03-19 04:47:35.233211 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:35.233215 | orchestrator | 2026-03-19 04:47:35.233219 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:47:35.233222 | orchestrator | Thursday 19 March 2026 04:47:33 +0000 (0:00:00.397) 0:11:26.548 ******** 2026-03-19 04:47:35.233226 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.233230 | orchestrator | 2026-03-19 04:47:35.233234 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:47:35.233241 | orchestrator | Thursday 19 March 2026 04:47:34 +0000 (0:00:01.573) 0:11:28.122 ******** 2026-03-19 04:47:35.233245 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:35.233249 | orchestrator | 2026-03-19 04:47:35.233252 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 04:47:35.233256 | orchestrator | Thursday 19 March 2026 04:47:34 +0000 (0:00:00.137) 0:11:28.260 ******** 2026-03-19 04:47:35.233260 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-19 04:47:35.233264 | orchestrator | 2026-03-19 04:47:35.233271 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-03-19 04:47:47.795907 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.225) 0:11:28.485 ******** 2026-03-19 04:47:47.796028 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796045 | orchestrator | 2026-03-19 04:47:47.796075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 04:47:47.796097 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.135) 0:11:28.621 ******** 2026-03-19 04:47:47.796109 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796120 | orchestrator | 2026-03-19 04:47:47.796131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 04:47:47.796142 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.144) 0:11:28.765 ******** 2026-03-19 04:47:47.796153 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796164 | orchestrator | 2026-03-19 04:47:47.796174 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 04:47:47.796201 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.155) 0:11:28.921 ******** 2026-03-19 04:47:47.796212 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796224 | orchestrator | 2026-03-19 04:47:47.796235 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-19 04:47:47.796246 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.146) 0:11:29.067 ******** 2026-03-19 04:47:47.796257 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796267 | orchestrator | 2026-03-19 04:47:47.796278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-19 04:47:47.796289 | orchestrator | Thursday 19 March 2026 04:47:35 +0000 (0:00:00.159) 0:11:29.227 ******** 2026-03-19 04:47:47.796300 | orchestrator | 
skipping: [testbed-node-1] 2026-03-19 04:47:47.796310 | orchestrator | 2026-03-19 04:47:47.796321 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 04:47:47.796332 | orchestrator | Thursday 19 March 2026 04:47:36 +0000 (0:00:00.149) 0:11:29.377 ******** 2026-03-19 04:47:47.796343 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796353 | orchestrator | 2026-03-19 04:47:47.796367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 04:47:47.796379 | orchestrator | Thursday 19 March 2026 04:47:36 +0000 (0:00:00.145) 0:11:29.522 ******** 2026-03-19 04:47:47.796391 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796404 | orchestrator | 2026-03-19 04:47:47.796416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 04:47:47.796428 | orchestrator | Thursday 19 March 2026 04:47:36 +0000 (0:00:00.137) 0:11:29.659 ******** 2026-03-19 04:47:47.796441 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:47:47.796453 | orchestrator | 2026-03-19 04:47:47.796466 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:47:47.796478 | orchestrator | Thursday 19 March 2026 04:47:36 +0000 (0:00:00.437) 0:11:30.097 ******** 2026-03-19 04:47:47.796490 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-19 04:47:47.796503 | orchestrator | 2026-03-19 04:47:47.796515 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 04:47:47.796527 | orchestrator | Thursday 19 March 2026 04:47:37 +0000 (0:00:00.215) 0:11:30.313 ******** 2026-03-19 04:47:47.796540 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-19 04:47:47.796578 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-19 
04:47:47.796591 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-19 04:47:47.796603 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-19 04:47:47.796615 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-19 04:47:47.796627 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-19 04:47:47.796639 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-19 04:47:47.796651 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-19 04:47:47.796662 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 04:47:47.796673 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 04:47:47.796684 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 04:47:47.796695 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 04:47:47.796705 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 04:47:47.796716 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 04:47:47.796727 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-19 04:47:47.796738 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-19 04:47:47.796749 | orchestrator | 2026-03-19 04:47:47.796759 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:47:47.796850 | orchestrator | Thursday 19 March 2026 04:47:42 +0000 (0:00:05.844) 0:11:36.157 ******** 2026-03-19 04:47:47.796863 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796874 | orchestrator | 2026-03-19 04:47:47.796885 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 04:47:47.796896 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.127) 0:11:36.284 ******** 
2026-03-19 04:47:47.796907 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796918 | orchestrator | 2026-03-19 04:47:47.796929 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:47:47.796940 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.118) 0:11:36.403 ******** 2026-03-19 04:47:47.796950 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.796961 | orchestrator | 2026-03-19 04:47:47.796972 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:47:47.796983 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.131) 0:11:36.535 ******** 2026-03-19 04:47:47.796994 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797004 | orchestrator | 2026-03-19 04:47:47.797015 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 04:47:47.797045 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.133) 0:11:36.669 ******** 2026-03-19 04:47:47.797057 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797067 | orchestrator | 2026-03-19 04:47:47.797078 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:47:47.797089 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.128) 0:11:36.797 ******** 2026-03-19 04:47:47.797099 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797110 | orchestrator | 2026-03-19 04:47:47.797121 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 04:47:47.797132 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.140) 0:11:36.938 ******** 2026-03-19 04:47:47.797142 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797153 | orchestrator | 2026-03-19 04:47:47.797164 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:47:47.797181 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.131) 0:11:37.070 ******** 2026-03-19 04:47:47.797192 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797203 | orchestrator | 2026-03-19 04:47:47.797214 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:47:47.797246 | orchestrator | Thursday 19 March 2026 04:47:43 +0000 (0:00:00.150) 0:11:37.220 ******** 2026-03-19 04:47:47.797274 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797296 | orchestrator | 2026-03-19 04:47:47.797314 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:47:47.797329 | orchestrator | Thursday 19 March 2026 04:47:44 +0000 (0:00:00.126) 0:11:37.347 ******** 2026-03-19 04:47:47.797345 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797361 | orchestrator | 2026-03-19 04:47:47.797377 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:47:47.797394 | orchestrator | Thursday 19 March 2026 04:47:44 +0000 (0:00:00.377) 0:11:37.724 ******** 2026-03-19 04:47:47.797409 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797427 | orchestrator | 2026-03-19 04:47:47.797443 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:47:47.797459 | orchestrator | Thursday 19 March 2026 04:47:44 +0000 (0:00:00.137) 0:11:37.861 ******** 2026-03-19 04:47:47.797474 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797490 | orchestrator | 2026-03-19 04:47:47.797507 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:47:47.797524 | orchestrator | Thursday 19 March 2026 04:47:44 +0000 
(0:00:00.127) 0:11:37.989 ******** 2026-03-19 04:47:47.797541 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797578 | orchestrator | 2026-03-19 04:47:47.797594 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:47:47.797611 | orchestrator | Thursday 19 March 2026 04:47:44 +0000 (0:00:00.236) 0:11:38.226 ******** 2026-03-19 04:47:47.797627 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797643 | orchestrator | 2026-03-19 04:47:47.797660 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:47:47.797677 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.126) 0:11:38.353 ******** 2026-03-19 04:47:47.797693 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797708 | orchestrator | 2026-03-19 04:47:47.797724 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:47:47.797740 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.234) 0:11:38.588 ******** 2026-03-19 04:47:47.797757 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797799 | orchestrator | 2026-03-19 04:47:47.797815 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:47:47.797831 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.133) 0:11:38.722 ******** 2026-03-19 04:47:47.797848 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797865 | orchestrator | 2026-03-19 04:47:47.797882 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:47:47.797900 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.133) 0:11:38.855 ******** 2026-03-19 04:47:47.797917 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797934 | orchestrator | 
2026-03-19 04:47:47.797950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:47:47.797966 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.140) 0:11:38.996 ******** 2026-03-19 04:47:47.797982 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.797998 | orchestrator | 2026-03-19 04:47:47.798013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:47:47.798096 | orchestrator | Thursday 19 March 2026 04:47:45 +0000 (0:00:00.151) 0:11:39.147 ******** 2026-03-19 04:47:47.798112 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.798128 | orchestrator | 2026-03-19 04:47:47.798145 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:47:47.798171 | orchestrator | Thursday 19 March 2026 04:47:46 +0000 (0:00:00.127) 0:11:39.275 ******** 2026-03-19 04:47:47.798187 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.798219 | orchestrator | 2026-03-19 04:47:47.798234 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:47:47.798251 | orchestrator | Thursday 19 March 2026 04:47:46 +0000 (0:00:00.134) 0:11:39.410 ******** 2026-03-19 04:47:47.798266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:47:47.798282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:47:47.798299 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:47:47.798315 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:47:47.798330 | orchestrator | 2026-03-19 04:47:47.798346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:47:47.798362 | orchestrator | Thursday 19 March 2026 04:47:46 +0000 (0:00:00.702) 0:11:40.112 ******** 2026-03-19 04:47:47.798378 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:47:47.798408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:48:16.338688 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:48:16.338858 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.338875 | orchestrator | 2026-03-19 04:48:16.338888 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:48:16.338900 | orchestrator | Thursday 19 March 2026 04:47:47 +0000 (0:00:00.932) 0:11:41.045 ******** 2026-03-19 04:48:16.338911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-19 04:48:16.338923 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-19 04:48:16.338934 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-19 04:48:16.338945 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.338956 | orchestrator | 2026-03-19 04:48:16.338968 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:48:16.338996 | orchestrator | Thursday 19 March 2026 04:47:48 +0000 (0:00:00.397) 0:11:41.443 ******** 2026-03-19 04:48:16.339008 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.339019 | orchestrator | 2026-03-19 04:48:16.339030 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:48:16.339041 | orchestrator | Thursday 19 March 2026 04:47:48 +0000 (0:00:00.147) 0:11:41.590 ******** 2026-03-19 04:48:16.339052 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-19 04:48:16.339064 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.339074 | orchestrator | 2026-03-19 04:48:16.339086 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:48:16.339096 | orchestrator | 
Thursday 19 March 2026 04:47:48 +0000 (0:00:00.332) 0:11:41.923 ******** 2026-03-19 04:48:16.339108 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.339129 | orchestrator | 2026-03-19 04:48:16.339147 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:48:16.339164 | orchestrator | Thursday 19 March 2026 04:47:49 +0000 (0:00:00.827) 0:11:42.750 ******** 2026-03-19 04:48:16.339190 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:48:16.339215 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-19 04:48:16.339235 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:48:16.339255 | orchestrator | 2026-03-19 04:48:16.339274 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-19 04:48:16.339288 | orchestrator | Thursday 19 March 2026 04:47:50 +0000 (0:00:00.626) 0:11:43.376 ******** 2026-03-19 04:48:16.339301 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-19 04:48:16.339313 | orchestrator | 2026-03-19 04:48:16.339326 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-19 04:48:16.339338 | orchestrator | Thursday 19 March 2026 04:47:50 +0000 (0:00:00.231) 0:11:43.608 ******** 2026-03-19 04:48:16.339351 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.339363 | orchestrator | 2026-03-19 04:48:16.339401 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-19 04:48:16.339414 | orchestrator | Thursday 19 March 2026 04:47:50 +0000 (0:00:00.519) 0:11:44.128 ******** 2026-03-19 04:48:16.339427 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.339439 | orchestrator | 2026-03-19 04:48:16.339451 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) 
on a mon node] ********************* 2026-03-19 04:48:16.339464 | orchestrator | Thursday 19 March 2026 04:47:50 +0000 (0:00:00.134) 0:11:44.262 ******** 2026-03-19 04:48:16.339476 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 04:48:16.339488 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 04:48:16.339501 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 04:48:16.339513 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-19 04:48:16.339528 | orchestrator | 2026-03-19 04:48:16.339547 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-19 04:48:16.339565 | orchestrator | Thursday 19 March 2026 04:47:57 +0000 (0:00:06.812) 0:11:51.076 ******** 2026-03-19 04:48:16.339583 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.339603 | orchestrator | 2026-03-19 04:48:16.339621 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-19 04:48:16.339639 | orchestrator | Thursday 19 March 2026 04:47:58 +0000 (0:00:00.451) 0:11:51.527 ******** 2026-03-19 04:48:16.339658 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 04:48:16.339670 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-19 04:48:16.339681 | orchestrator | 2026-03-19 04:48:16.339692 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-19 04:48:16.339702 | orchestrator | Thursday 19 March 2026 04:48:00 +0000 (0:00:02.379) 0:11:53.906 ******** 2026-03-19 04:48:16.339713 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-19 04:48:16.339724 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-19 04:48:16.339735 | orchestrator | 2026-03-19 04:48:16.339746 | orchestrator | TASK [ceph-mgr : Set mgr key 
permissions] ************************************** 2026-03-19 04:48:16.339786 | orchestrator | Thursday 19 March 2026 04:48:01 +0000 (0:00:01.019) 0:11:54.926 ******** 2026-03-19 04:48:16.339798 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.339808 | orchestrator | 2026-03-19 04:48:16.339819 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-19 04:48:16.339830 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.538) 0:11:55.465 ******** 2026-03-19 04:48:16.339840 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.339851 | orchestrator | 2026-03-19 04:48:16.339862 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-19 04:48:16.339872 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.123) 0:11:55.588 ******** 2026-03-19 04:48:16.339883 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.339893 | orchestrator | 2026-03-19 04:48:16.339904 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-19 04:48:16.339934 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.127) 0:11:55.716 ******** 2026-03-19 04:48:16.339946 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-19 04:48:16.339956 | orchestrator | 2026-03-19 04:48:16.339967 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-19 04:48:16.339978 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.200) 0:11:55.916 ******** 2026-03-19 04:48:16.339989 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.340000 | orchestrator | 2026-03-19 04:48:16.340011 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-19 04:48:16.340021 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.145) 0:11:56.061 ******** 
2026-03-19 04:48:16.340041 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.340060 | orchestrator | 2026-03-19 04:48:16.340099 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-19 04:48:16.340133 | orchestrator | Thursday 19 March 2026 04:48:02 +0000 (0:00:00.142) 0:11:56.204 ******** 2026-03-19 04:48:16.340152 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-19 04:48:16.340172 | orchestrator | 2026-03-19 04:48:16.340188 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-19 04:48:16.340207 | orchestrator | Thursday 19 March 2026 04:48:03 +0000 (0:00:00.205) 0:11:56.409 ******** 2026-03-19 04:48:16.340225 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.340246 | orchestrator | 2026-03-19 04:48:16.340265 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-19 04:48:16.340284 | orchestrator | Thursday 19 March 2026 04:48:04 +0000 (0:00:00.985) 0:11:57.394 ******** 2026-03-19 04:48:16.340303 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.340323 | orchestrator | 2026-03-19 04:48:16.340343 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-19 04:48:16.340362 | orchestrator | Thursday 19 March 2026 04:48:05 +0000 (0:00:01.215) 0:11:58.610 ******** 2026-03-19 04:48:16.340381 | orchestrator | ok: [testbed-node-1] 2026-03-19 04:48:16.340400 | orchestrator | 2026-03-19 04:48:16.340419 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-19 04:48:16.340438 | orchestrator | Thursday 19 March 2026 04:48:06 +0000 (0:00:01.486) 0:12:00.096 ******** 2026-03-19 04:48:16.340457 | orchestrator | changed: [testbed-node-1] 2026-03-19 04:48:16.340477 | orchestrator | 2026-03-19 04:48:16.340497 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2026-03-19 04:48:16.340515 | orchestrator | Thursday 19 March 2026 04:48:09 +0000 (0:00:02.966) 0:12:03.062 ******** 2026-03-19 04:48:16.340534 | orchestrator | skipping: [testbed-node-1] 2026-03-19 04:48:16.340554 | orchestrator | 2026-03-19 04:48:16.340573 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-19 04:48:16.340590 | orchestrator | 2026-03-19 04:48:16.340609 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-19 04:48:16.340628 | orchestrator | Thursday 19 March 2026 04:48:10 +0000 (0:00:00.604) 0:12:03.667 ******** 2026-03-19 04:48:16.340648 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:48:16.340668 | orchestrator | 2026-03-19 04:48:16.340687 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-19 04:48:16.340708 | orchestrator | Thursday 19 March 2026 04:48:12 +0000 (0:00:01.937) 0:12:05.604 ******** 2026-03-19 04:48:16.340728 | orchestrator | changed: [testbed-node-2] 2026-03-19 04:48:16.340747 | orchestrator | 2026-03-19 04:48:16.340808 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:48:16.340828 | orchestrator | Thursday 19 March 2026 04:48:13 +0000 (0:00:01.587) 0:12:07.192 ******** 2026-03-19 04:48:16.340846 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-19 04:48:16.340864 | orchestrator | 2026-03-19 04:48:16.340881 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:48:16.340899 | orchestrator | Thursday 19 March 2026 04:48:14 +0000 (0:00:00.239) 0:12:07.432 ******** 2026-03-19 04:48:16.340917 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.340934 | orchestrator | 2026-03-19 04:48:16.340952 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2026-03-19 04:48:16.340969 | orchestrator | Thursday 19 March 2026 04:48:14 +0000 (0:00:00.493) 0:12:07.925 ******** 2026-03-19 04:48:16.340986 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341005 | orchestrator | 2026-03-19 04:48:16.341023 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:48:16.341041 | orchestrator | Thursday 19 March 2026 04:48:14 +0000 (0:00:00.145) 0:12:08.071 ******** 2026-03-19 04:48:16.341059 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341077 | orchestrator | 2026-03-19 04:48:16.341096 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:48:16.341132 | orchestrator | Thursday 19 March 2026 04:48:15 +0000 (0:00:00.529) 0:12:08.601 ******** 2026-03-19 04:48:16.341151 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341168 | orchestrator | 2026-03-19 04:48:16.341186 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:48:16.341204 | orchestrator | Thursday 19 March 2026 04:48:15 +0000 (0:00:00.407) 0:12:09.008 ******** 2026-03-19 04:48:16.341221 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341239 | orchestrator | 2026-03-19 04:48:16.341259 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:48:16.341277 | orchestrator | Thursday 19 March 2026 04:48:15 +0000 (0:00:00.135) 0:12:09.144 ******** 2026-03-19 04:48:16.341296 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341314 | orchestrator | 2026-03-19 04:48:16.341332 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:48:16.341351 | orchestrator | Thursday 19 March 2026 04:48:16 +0000 (0:00:00.154) 0:12:09.298 ******** 2026-03-19 04:48:16.341370 | orchestrator | skipping: [testbed-node-2] 
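The fact-gathering tasks above (the podman binary check followed by `Set_fact container_binary`) amount to a simple runtime probe. A minimal stand-alone sketch of the equivalent selection logic, assuming podman is preferred and docker is the fallback (illustrative only; the role records the result as the Ansible fact `container_binary` rather than a shell variable):

```shell
# Pick a container runtime the way the ceph-facts tasks do:
# prefer podman when its binary is present, else fall back to docker.
if command -v podman >/dev/null 2>&1; then
    container_binary=podman
else
    container_binary=docker
fi
echo "container_binary=${container_binary}"
```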
2026-03-19 04:48:16.341389 | orchestrator | 2026-03-19 04:48:16.341409 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:48:16.341428 | orchestrator | Thursday 19 March 2026 04:48:16 +0000 (0:00:00.148) 0:12:09.446 ******** 2026-03-19 04:48:16.341447 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:16.341465 | orchestrator | 2026-03-19 04:48:16.341506 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:48:23.918852 | orchestrator | Thursday 19 March 2026 04:48:16 +0000 (0:00:00.138) 0:12:09.585 ******** 2026-03-19 04:48:23.918963 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:48:23.918979 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:48:23.918991 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:48:23.919003 | orchestrator | 2026-03-19 04:48:23.919014 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:48:23.919025 | orchestrator | Thursday 19 March 2026 04:48:16 +0000 (0:00:00.637) 0:12:10.223 ******** 2026-03-19 04:48:23.919036 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:23.919047 | orchestrator | 2026-03-19 04:48:23.919074 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:48:23.919086 | orchestrator | Thursday 19 March 2026 04:48:17 +0000 (0:00:00.249) 0:12:10.472 ******** 2026-03-19 04:48:23.919095 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:48:23.919101 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:48:23.919108 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:48:23.919114 | orchestrator | 
2026-03-19 04:48:23.919120 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:48:23.919127 | orchestrator | Thursday 19 March 2026 04:48:19 +0000 (0:00:01.796) 0:12:12.269 ******** 2026-03-19 04:48:23.919133 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:48:23.919140 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:48:23.919146 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:48:23.919153 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919159 | orchestrator | 2026-03-19 04:48:23.919165 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:48:23.919172 | orchestrator | Thursday 19 March 2026 04:48:19 +0000 (0:00:00.400) 0:12:12.669 ******** 2026-03-19 04:48:23.919180 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919189 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919219 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919234 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919244 | orchestrator | 2026-03-19 04:48:23.919255 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:48:23.919264 | 
orchestrator | Thursday 19 March 2026 04:48:20 +0000 (0:00:00.936) 0:12:13.606 ******** 2026-03-19 04:48:23.919276 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919289 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919301 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:23.919311 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919322 | orchestrator | 2026-03-19 04:48:23.919333 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:48:23.919344 | orchestrator | Thursday 19 March 2026 04:48:20 +0000 (0:00:00.185) 0:12:13.791 ******** 2026-03-19 04:48:23.919375 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:48:17.713756', 'end': '2026-03-19 04:48:17.769913', 'delta': '0:00:00.056157', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:48:23.919392 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:48:18.273167', 'end': '2026-03-19 04:48:18.321684', 'delta': '0:00:00.048517', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:48:23.919400 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:48:18.812945', 'end': '2026-03-19 04:48:18.866491', 'delta': '0:00:00.053546', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:48:23.919415 | orchestrator | 2026-03-19 04:48:23.919423 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:48:23.919430 | orchestrator | Thursday 19 March 2026 04:48:20 +0000 (0:00:00.199) 0:12:13.991 ******** 2026-03-19 04:48:23.919437 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:23.919444 | orchestrator | 2026-03-19 04:48:23.919451 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:48:23.919459 | orchestrator | Thursday 19 March 2026 04:48:20 +0000 (0:00:00.251) 0:12:14.242 ******** 2026-03-19 04:48:23.919466 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919473 | orchestrator | 2026-03-19 04:48:23.919480 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:48:23.919487 | orchestrator | Thursday 19 March 2026 04:48:21 +0000 (0:00:00.852) 0:12:15.094 ******** 2026-03-19 04:48:23.919494 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:23.919501 | orchestrator | 2026-03-19 04:48:23.919508 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:48:23.919515 | orchestrator | Thursday 19 March 2026 04:48:21 +0000 (0:00:00.140) 0:12:15.235 ******** 2026-03-19 04:48:23.919522 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:48:23.919529 | orchestrator | 2026-03-19 04:48:23.919536 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:48:23.919544 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:01.049) 0:12:16.284 ******** 2026-03-19 04:48:23.919551 | orchestrator | ok: [testbed-node-2] 2026-03-19 
04:48:23.919558 | orchestrator | 2026-03-19 04:48:23.919565 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:48:23.919572 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:00.163) 0:12:16.447 ******** 2026-03-19 04:48:23.919579 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919587 | orchestrator | 2026-03-19 04:48:23.919594 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:48:23.919601 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:00.113) 0:12:16.561 ******** 2026-03-19 04:48:23.919608 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919615 | orchestrator | 2026-03-19 04:48:23.919623 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:48:23.919630 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:00.214) 0:12:16.775 ******** 2026-03-19 04:48:23.919637 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919647 | orchestrator | 2026-03-19 04:48:23.919657 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:48:23.919667 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:00.111) 0:12:16.887 ******** 2026-03-19 04:48:23.919678 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919688 | orchestrator | 2026-03-19 04:48:23.919698 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:48:23.919709 | orchestrator | Thursday 19 March 2026 04:48:23 +0000 (0:00:00.161) 0:12:17.049 ******** 2026-03-19 04:48:23.919719 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:23.919730 | orchestrator | 2026-03-19 04:48:23.919775 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:48:25.198262 | orchestrator | Thursday 19 
March 2026 04:48:23 +0000 (0:00:00.128) 0:12:17.177 ******** 2026-03-19 04:48:25.198375 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:25.198384 | orchestrator | 2026-03-19 04:48:25.198390 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:48:25.198396 | orchestrator | Thursday 19 March 2026 04:48:24 +0000 (0:00:00.124) 0:12:17.301 ******** 2026-03-19 04:48:25.198401 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:25.198405 | orchestrator | 2026-03-19 04:48:25.198410 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:48:25.198415 | orchestrator | Thursday 19 March 2026 04:48:24 +0000 (0:00:00.113) 0:12:17.415 ******** 2026-03-19 04:48:25.198421 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:25.198425 | orchestrator | 2026-03-19 04:48:25.198440 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:48:25.198446 | orchestrator | Thursday 19 March 2026 04:48:24 +0000 (0:00:00.130) 0:12:17.545 ******** 2026-03-19 04:48:25.198451 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:25.198455 | orchestrator | 2026-03-19 04:48:25.198460 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:48:25.198465 | orchestrator | Thursday 19 March 2026 04:48:24 +0000 (0:00:00.155) 0:12:17.701 ******** 2026-03-19 04:48:25.198472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198479 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:48:25.198498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198503 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:48:25.198543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:48:25.198553 | orchestrator | 
skipping: [testbed-node-2] 2026-03-19 04:48:25.198558 | orchestrator | 2026-03-19 04:48:25.198563 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:48:25.198568 | orchestrator | Thursday 19 March 2026 04:48:24 +0000 (0:00:00.240) 0:12:17.941 ******** 2026-03-19 04:48:25.198574 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:25.198590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886314 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-57-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886443 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886456 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886467 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886534 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8266a944', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_8266a944-9a5f-4e36-bd18-89fd67130cb1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886550 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:48:26.886584 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:26.886597 | orchestrator | 2026-03-19 04:48:26.886609 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:48:26.886621 | orchestrator | Thursday 19 March 2026 04:48:25 +0000 (0:00:00.516) 0:12:18.457 ******** 2026-03-19 04:48:26.886632 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:26.886643 | orchestrator | 2026-03-19 04:48:26.886654 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:48:26.886665 | orchestrator 
| Thursday 19 March 2026 04:48:25 +0000 (0:00:00.489) 0:12:18.947 ******** 2026-03-19 04:48:26.886683 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:26.886702 | orchestrator | 2026-03-19 04:48:26.886728 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:48:26.886780 | orchestrator | Thursday 19 March 2026 04:48:25 +0000 (0:00:00.138) 0:12:19.086 ******** 2026-03-19 04:48:26.886800 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:26.886833 | orchestrator | 2026-03-19 04:48:26.886853 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:48:26.886871 | orchestrator | Thursday 19 March 2026 04:48:26 +0000 (0:00:00.539) 0:12:19.626 ******** 2026-03-19 04:48:26.886889 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:26.886908 | orchestrator | 2026-03-19 04:48:26.886925 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:48:26.886944 | orchestrator | Thursday 19 March 2026 04:48:26 +0000 (0:00:00.155) 0:12:19.782 ******** 2026-03-19 04:48:26.886962 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:26.886980 | orchestrator | 2026-03-19 04:48:26.886998 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:48:26.887019 | orchestrator | Thursday 19 March 2026 04:48:26 +0000 (0:00:00.210) 0:12:19.993 ******** 2026-03-19 04:48:26.887038 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:26.887055 | orchestrator | 2026-03-19 04:48:26.887069 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:48:26.887093 | orchestrator | Thursday 19 March 2026 04:48:26 +0000 (0:00:00.149) 0:12:20.143 ******** 2026-03-19 04:48:37.177189 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-19 04:48:37.177315 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-03-19 04:48:37.177332 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:48:37.177345 | orchestrator | 2026-03-19 04:48:37.177357 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:48:37.177382 | orchestrator | Thursday 19 March 2026 04:48:27 +0000 (0:00:00.692) 0:12:20.835 ******** 2026-03-19 04:48:37.177394 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-19 04:48:37.177405 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-19 04:48:37.177416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-19 04:48:37.177427 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.177438 | orchestrator | 2026-03-19 04:48:37.177449 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:48:37.177459 | orchestrator | Thursday 19 March 2026 04:48:27 +0000 (0:00:00.168) 0:12:21.003 ******** 2026-03-19 04:48:37.177470 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.177481 | orchestrator | 2026-03-19 04:48:37.177492 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:48:37.177503 | orchestrator | Thursday 19 March 2026 04:48:27 +0000 (0:00:00.142) 0:12:21.146 ******** 2026-03-19 04:48:37.177520 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:48:37.177539 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:48:37.177558 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:48:37.177577 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:48:37.177596 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-19 04:48:37.177638 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:48:37.177657 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:48:37.177677 | orchestrator | 2026-03-19 04:48:37.177697 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:48:37.177717 | orchestrator | Thursday 19 March 2026 04:48:28 +0000 (0:00:01.078) 0:12:22.224 ******** 2026-03-19 04:48:37.177735 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:48:37.177776 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:48:37.177794 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-19 04:48:37.177812 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:48:37.177832 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:48:37.177851 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:48:37.177870 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:48:37.177888 | orchestrator | 2026-03-19 04:48:37.177907 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:48:37.177926 | orchestrator | Thursday 19 March 2026 04:48:30 +0000 (0:00:01.621) 0:12:23.845 ******** 2026-03-19 04:48:37.177943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-19 04:48:37.177961 | orchestrator | 2026-03-19 04:48:37.177979 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:48:37.177998 
| orchestrator | Thursday 19 March 2026 04:48:31 +0000 (0:00:00.477) 0:12:24.323 ******** 2026-03-19 04:48:37.178079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-19 04:48:37.178093 | orchestrator | 2026-03-19 04:48:37.178104 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:48:37.178115 | orchestrator | Thursday 19 March 2026 04:48:31 +0000 (0:00:00.230) 0:12:24.554 ******** 2026-03-19 04:48:37.178125 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.178136 | orchestrator | 2026-03-19 04:48:37.178146 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:48:37.178157 | orchestrator | Thursday 19 March 2026 04:48:31 +0000 (0:00:00.579) 0:12:25.134 ******** 2026-03-19 04:48:37.178168 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178178 | orchestrator | 2026-03-19 04:48:37.178189 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:48:37.178199 | orchestrator | Thursday 19 March 2026 04:48:32 +0000 (0:00:00.134) 0:12:25.268 ******** 2026-03-19 04:48:37.178210 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178220 | orchestrator | 2026-03-19 04:48:37.178231 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:48:37.178242 | orchestrator | Thursday 19 March 2026 04:48:32 +0000 (0:00:00.123) 0:12:25.391 ******** 2026-03-19 04:48:37.178253 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178263 | orchestrator | 2026-03-19 04:48:37.178274 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:48:37.178285 | orchestrator | Thursday 19 March 2026 04:48:32 +0000 (0:00:00.132) 0:12:25.524 ******** 2026-03-19 04:48:37.178296 | orchestrator | ok: [testbed-node-2] 
2026-03-19 04:48:37.178306 | orchestrator | 2026-03-19 04:48:37.178317 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:48:37.178327 | orchestrator | Thursday 19 March 2026 04:48:32 +0000 (0:00:00.561) 0:12:26.085 ******** 2026-03-19 04:48:37.178338 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178348 | orchestrator | 2026-03-19 04:48:37.178359 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:48:37.178400 | orchestrator | Thursday 19 March 2026 04:48:32 +0000 (0:00:00.129) 0:12:26.215 ******** 2026-03-19 04:48:37.178412 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178423 | orchestrator | 2026-03-19 04:48:37.178434 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:48:37.178452 | orchestrator | Thursday 19 March 2026 04:48:33 +0000 (0:00:00.144) 0:12:26.360 ******** 2026-03-19 04:48:37.178463 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.178474 | orchestrator | 2026-03-19 04:48:37.178485 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:48:37.178496 | orchestrator | Thursday 19 March 2026 04:48:33 +0000 (0:00:00.537) 0:12:26.898 ******** 2026-03-19 04:48:37.178506 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.178517 | orchestrator | 2026-03-19 04:48:37.178528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:48:37.178538 | orchestrator | Thursday 19 March 2026 04:48:34 +0000 (0:00:00.554) 0:12:27.452 ******** 2026-03-19 04:48:37.178549 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178560 | orchestrator | 2026-03-19 04:48:37.178570 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:48:37.178581 | orchestrator | Thursday 19 
March 2026 04:48:34 +0000 (0:00:00.365) 0:12:27.818 ******** 2026-03-19 04:48:37.178592 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.178603 | orchestrator | 2026-03-19 04:48:37.178613 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:48:37.178624 | orchestrator | Thursday 19 March 2026 04:48:34 +0000 (0:00:00.141) 0:12:27.960 ******** 2026-03-19 04:48:37.178635 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178646 | orchestrator | 2026-03-19 04:48:37.178656 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:48:37.178667 | orchestrator | Thursday 19 March 2026 04:48:34 +0000 (0:00:00.138) 0:12:28.098 ******** 2026-03-19 04:48:37.178677 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178688 | orchestrator | 2026-03-19 04:48:37.178699 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:48:37.178710 | orchestrator | Thursday 19 March 2026 04:48:34 +0000 (0:00:00.127) 0:12:28.226 ******** 2026-03-19 04:48:37.178721 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178731 | orchestrator | 2026-03-19 04:48:37.178794 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:48:37.178817 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.132) 0:12:28.358 ******** 2026-03-19 04:48:37.178836 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178854 | orchestrator | 2026-03-19 04:48:37.178871 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:48:37.178890 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.116) 0:12:28.474 ******** 2026-03-19 04:48:37.178908 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.178926 | orchestrator | 2026-03-19 04:48:37.178943 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:48:37.178962 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.139) 0:12:28.614 ******** 2026-03-19 04:48:37.178980 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.178998 | orchestrator | 2026-03-19 04:48:37.179014 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:48:37.179031 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.158) 0:12:28.773 ******** 2026-03-19 04:48:37.179048 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.179066 | orchestrator | 2026-03-19 04:48:37.179084 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:48:37.179102 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.154) 0:12:28.927 ******** 2026-03-19 04:48:37.179118 | orchestrator | ok: [testbed-node-2] 2026-03-19 04:48:37.179134 | orchestrator | 2026-03-19 04:48:37.179150 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:48:37.179180 | orchestrator | Thursday 19 March 2026 04:48:35 +0000 (0:00:00.206) 0:12:29.134 ******** 2026-03-19 04:48:37.179199 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.179216 | orchestrator | 2026-03-19 04:48:37.179231 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:48:37.179248 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.131) 0:12:29.265 ******** 2026-03-19 04:48:37.179264 | orchestrator | skipping: [testbed-node-2] 2026-03-19 04:48:37.179281 | orchestrator | 2026-03-19 04:48:37.179299 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:48:37.179316 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.133) 0:12:29.398 ******** 2026-03-19 04:48:37.179333 | 
orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:37.179350 | orchestrator |
2026-03-19 04:48:37.179365 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:48:37.179381 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.384) 0:12:29.783 ********
2026-03-19 04:48:37.179399 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:37.179416 | orchestrator |
2026-03-19 04:48:37.179433 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:48:37.179451 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.132) 0:12:29.916 ********
2026-03-19 04:48:37.179467 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:37.179483 | orchestrator |
2026-03-19 04:48:37.179500 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:48:37.179517 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.123) 0:12:30.040 ********
2026-03-19 04:48:37.179532 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:37.179549 | orchestrator |
2026-03-19 04:48:37.179565 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:48:37.179583 | orchestrator | Thursday 19 March 2026 04:48:36 +0000 (0:00:00.136) 0:12:30.176 ********
2026-03-19 04:48:37.179600 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:37.179617 | orchestrator |
2026-03-19 04:48:37.179634 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:48:37.179652 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.140) 0:12:30.317 ********
2026-03-19 04:48:37.179686 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.248511 | orchestrator |
2026-03-19 04:48:54.248629 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:48:54.248645 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.117) 0:12:30.434 ********
2026-03-19 04:48:54.248658 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.248670 | orchestrator |
2026-03-19 04:48:54.248697 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:48:54.248709 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.125) 0:12:30.559 ********
2026-03-19 04:48:54.248720 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.248844 | orchestrator |
2026-03-19 04:48:54.248862 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:48:54.248873 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.122) 0:12:30.682 ********
2026-03-19 04:48:54.248884 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.248895 | orchestrator |
2026-03-19 04:48:54.248906 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:48:54.248918 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.116) 0:12:30.799 ********
2026-03-19 04:48:54.248929 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.248940 | orchestrator |
2026-03-19 04:48:54.248951 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:48:54.248961 | orchestrator | Thursday 19 March 2026 04:48:37 +0000 (0:00:00.192) 0:12:30.991 ********
2026-03-19 04:48:54.248972 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.248984 | orchestrator |
2026-03-19 04:48:54.249024 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:48:54.249035 | orchestrator | Thursday 19 March 2026 04:48:38 +0000 (0:00:00.825) 0:12:31.817 ********
2026-03-19 04:48:54.249052 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.249072 | orchestrator |
2026-03-19 04:48:54.249090 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:48:54.249110 | orchestrator | Thursday 19 March 2026 04:48:39 +0000 (0:00:01.303) 0:12:33.121 ********
2026-03-19 04:48:54.249131 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-19 04:48:54.249152 | orchestrator |
2026-03-19 04:48:54.249171 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:48:54.249184 | orchestrator | Thursday 19 March 2026 04:48:40 +0000 (0:00:00.482) 0:12:33.604 ********
2026-03-19 04:48:54.249197 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249209 | orchestrator |
2026-03-19 04:48:54.249221 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:48:54.249234 | orchestrator | Thursday 19 March 2026 04:48:40 +0000 (0:00:00.130) 0:12:33.735 ********
2026-03-19 04:48:54.249246 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249259 | orchestrator |
2026-03-19 04:48:54.249271 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:48:54.249284 | orchestrator | Thursday 19 March 2026 04:48:40 +0000 (0:00:00.141) 0:12:33.876 ********
2026-03-19 04:48:54.249296 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:48:54.249308 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:48:54.249321 | orchestrator |
2026-03-19 04:48:54.249334 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:48:54.249347 | orchestrator | Thursday 19 March 2026 04:48:41 +0000 (0:00:00.858) 0:12:34.735 ********
2026-03-19 04:48:54.249358 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.249371 | orchestrator |
2026-03-19 04:48:54.249384 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:48:54.249397 | orchestrator | Thursday 19 March 2026 04:48:41 +0000 (0:00:00.452) 0:12:35.188 ********
2026-03-19 04:48:54.249410 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249423 | orchestrator |
2026-03-19 04:48:54.249436 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:48:54.249446 | orchestrator | Thursday 19 March 2026 04:48:42 +0000 (0:00:00.148) 0:12:35.336 ********
2026-03-19 04:48:54.249457 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249468 | orchestrator |
2026-03-19 04:48:54.249479 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:48:54.249490 | orchestrator | Thursday 19 March 2026 04:48:42 +0000 (0:00:00.133) 0:12:35.469 ********
2026-03-19 04:48:54.249500 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249511 | orchestrator |
2026-03-19 04:48:54.249522 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:48:54.249532 | orchestrator | Thursday 19 March 2026 04:48:42 +0000 (0:00:00.123) 0:12:35.593 ********
2026-03-19 04:48:54.249543 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-19 04:48:54.249554 | orchestrator |
2026-03-19 04:48:54.249565 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:48:54.249575 | orchestrator | Thursday 19 March 2026 04:48:42 +0000 (0:00:00.213) 0:12:35.806 ********
2026-03-19 04:48:54.249586 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.249597 | orchestrator |
2026-03-19 04:48:54.249608 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:48:54.249618 | orchestrator | Thursday 19 March 2026 04:48:43 +0000 (0:00:00.713) 0:12:36.519 ********
2026-03-19 04:48:54.249629 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:48:54.249649 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:48:54.249660 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:48:54.249671 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249681 | orchestrator |
2026-03-19 04:48:54.249692 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:48:54.249704 | orchestrator | Thursday 19 March 2026 04:48:43 +0000 (0:00:00.147) 0:12:36.667 ********
2026-03-19 04:48:54.249760 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249775 | orchestrator |
2026-03-19 04:48:54.249786 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:48:54.249796 | orchestrator | Thursday 19 March 2026 04:48:43 +0000 (0:00:00.109) 0:12:36.777 ********
2026-03-19 04:48:54.249807 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249818 | orchestrator |
2026-03-19 04:48:54.249836 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:48:54.249847 | orchestrator | Thursday 19 March 2026 04:48:43 +0000 (0:00:00.441) 0:12:37.218 ********
2026-03-19 04:48:54.249858 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249869 | orchestrator |
2026-03-19 04:48:54.249880 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:48:54.249891 | orchestrator | Thursday 19 March 2026 04:48:44 +0000 (0:00:00.151) 0:12:37.370 ********
2026-03-19 04:48:54.249901 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249912 | orchestrator |
2026-03-19 04:48:54.249923 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:48:54.249934 | orchestrator | Thursday 19 March 2026 04:48:44 +0000 (0:00:00.151) 0:12:37.521 ********
2026-03-19 04:48:54.249944 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.249955 | orchestrator |
2026-03-19 04:48:54.249966 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:48:54.249977 | orchestrator | Thursday 19 March 2026 04:48:44 +0000 (0:00:00.151) 0:12:37.672 ********
2026-03-19 04:48:54.249987 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.249998 | orchestrator |
2026-03-19 04:48:54.250009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:48:54.250091 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:01.638) 0:12:39.310 ********
2026-03-19 04:48:54.250103 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.250114 | orchestrator |
2026-03-19 04:48:54.250125 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:48:54.250136 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.144) 0:12:39.455 ********
2026-03-19 04:48:54.250147 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-03-19 04:48:54.250158 | orchestrator |
2026-03-19 04:48:54.250169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:48:54.250179 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.221) 0:12:39.677 ********
2026-03-19 04:48:54.250190 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250201 | orchestrator |
2026-03-19 04:48:54.250212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:48:54.250223 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.143) 0:12:39.820 ********
2026-03-19 04:48:54.250234 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250244 | orchestrator |
2026-03-19 04:48:54.250255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:48:54.250266 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.148) 0:12:39.968 ********
2026-03-19 04:48:54.250277 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250288 | orchestrator |
2026-03-19 04:48:54.250299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:48:54.250309 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.148) 0:12:40.117 ********
2026-03-19 04:48:54.250320 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250339 | orchestrator |
2026-03-19 04:48:54.250350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:48:54.250361 | orchestrator | Thursday 19 March 2026 04:48:46 +0000 (0:00:00.133) 0:12:40.251 ********
2026-03-19 04:48:54.250372 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250382 | orchestrator |
2026-03-19 04:48:54.250393 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:48:54.250404 | orchestrator | Thursday 19 March 2026 04:48:47 +0000 (0:00:00.155) 0:12:40.406 ********
2026-03-19 04:48:54.250415 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250425 | orchestrator |
2026-03-19 04:48:54.250436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:48:54.250447 | orchestrator | Thursday 19 March 2026 04:48:47 +0000 (0:00:00.397) 0:12:40.804 ********
2026-03-19 04:48:54.250458 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250469 | orchestrator |
2026-03-19 04:48:54.250480 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:48:54.250491 | orchestrator | Thursday 19 March 2026 04:48:47 +0000 (0:00:00.140) 0:12:40.944 ********
2026-03-19 04:48:54.250501 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:48:54.250512 | orchestrator |
2026-03-19 04:48:54.250523 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:48:54.250534 | orchestrator | Thursday 19 March 2026 04:48:47 +0000 (0:00:00.158) 0:12:41.102 ********
2026-03-19 04:48:54.250544 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:48:54.250555 | orchestrator |
2026-03-19 04:48:54.250566 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:48:54.250577 | orchestrator | Thursday 19 March 2026 04:48:48 +0000 (0:00:00.228) 0:12:41.330 ********
2026-03-19 04:48:54.250587 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-03-19 04:48:54.250598 | orchestrator |
2026-03-19 04:48:54.250609 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:48:54.250620 | orchestrator | Thursday 19 March 2026 04:48:48 +0000 (0:00:00.220) 0:12:41.551 ********
2026-03-19 04:48:54.250631 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-03-19 04:48:54.250642 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-19 04:48:54.250653 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-19 04:48:54.250664 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-19 04:48:54.250675 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-19 04:48:54.250686 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-19 04:48:54.250704 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-19 04:49:09.186945 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:49:09.187052 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:49:09.187063 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:49:09.187084 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:49:09.187092 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:49:09.187099 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:49:09.187106 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:49:09.187113 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-03-19 04:49:09.187121 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-03-19 04:49:09.187128 | orchestrator |
2026-03-19 04:49:09.187135 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:49:09.187142 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:05.944) 0:12:47.495 ********
2026-03-19 04:49:09.187149 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187156 | orchestrator |
2026-03-19 04:49:09.187163 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:49:09.187188 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:00.133) 0:12:47.629 ********
2026-03-19 04:49:09.187195 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187202 | orchestrator |
2026-03-19 04:49:09.187209 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:49:09.187215 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:00.130) 0:12:47.759 ********
2026-03-19 04:49:09.187222 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187229 | orchestrator |
2026-03-19 04:49:09.187235 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:49:09.187242 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:00.115) 0:12:47.875 ********
2026-03-19 04:49:09.187249 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187255 | orchestrator |
2026-03-19 04:49:09.187262 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:49:09.187269 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:00.125) 0:12:48.000 ********
2026-03-19 04:49:09.187275 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187282 | orchestrator |
2026-03-19 04:49:09.187288 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:49:09.187295 | orchestrator | Thursday 19 March 2026 04:48:54 +0000 (0:00:00.120) 0:12:48.121 ********
2026-03-19 04:49:09.187302 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187308 | orchestrator |
2026-03-19 04:49:09.187315 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:49:09.187322 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.395) 0:12:48.516 ********
2026-03-19 04:49:09.187329 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187336 | orchestrator |
2026-03-19 04:49:09.187342 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:49:09.187349 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.135) 0:12:48.652 ********
2026-03-19 04:49:09.187356 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187363 | orchestrator |
2026-03-19 04:49:09.187369 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:49:09.187376 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.128) 0:12:48.780 ********
2026-03-19 04:49:09.187383 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187390 | orchestrator |
2026-03-19 04:49:09.187396 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:49:09.187403 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.113) 0:12:48.893 ********
2026-03-19 04:49:09.187410 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187416 | orchestrator |
2026-03-19 04:49:09.187427 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:49:09.187438 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.152) 0:12:49.046 ********
2026-03-19 04:49:09.187449 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187459 | orchestrator |
2026-03-19 04:49:09.187470 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:49:09.187480 | orchestrator | Thursday 19 March 2026 04:48:55 +0000 (0:00:00.150) 0:12:49.196 ********
2026-03-19 04:49:09.187491 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187502 | orchestrator |
2026-03-19 04:49:09.187513 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:49:09.187522 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.121) 0:12:49.317 ********
2026-03-19 04:49:09.187532 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187542 | orchestrator |
2026-03-19 04:49:09.187554 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:49:09.187565 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.224) 0:12:49.542 ********
2026-03-19 04:49:09.187585 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187597 | orchestrator |
2026-03-19 04:49:09.187608 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:49:09.187620 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.135) 0:12:49.678 ********
2026-03-19 04:49:09.187633 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187644 | orchestrator |
2026-03-19 04:49:09.187654 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:49:09.187665 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.206) 0:12:49.884 ********
2026-03-19 04:49:09.187677 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187687 | orchestrator |
2026-03-19 04:49:09.187698 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:49:09.187709 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.125) 0:12:50.010 ********
2026-03-19 04:49:09.187792 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187807 | orchestrator |
2026-03-19 04:49:09.187818 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:49:09.187839 | orchestrator | Thursday 19 March 2026 04:48:56 +0000 (0:00:00.133) 0:12:50.143 ********
2026-03-19 04:49:09.187850 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187861 | orchestrator |
2026-03-19 04:49:09.187872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:49:09.187883 | orchestrator | Thursday 19 March 2026 04:48:57 +0000 (0:00:00.142) 0:12:50.286 ********
2026-03-19 04:49:09.187893 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187903 | orchestrator |
2026-03-19 04:49:09.187913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:49:09.187923 | orchestrator | Thursday 19 March 2026 04:48:57 +0000 (0:00:00.403) 0:12:50.689 ********
2026-03-19 04:49:09.187934 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187945 | orchestrator |
2026-03-19 04:49:09.187955 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:49:09.187966 | orchestrator | Thursday 19 March 2026 04:48:57 +0000 (0:00:00.126) 0:12:50.816 ********
2026-03-19 04:49:09.187976 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.187986 | orchestrator |
2026-03-19 04:49:09.187996 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:49:09.188006 | orchestrator | Thursday 19 March 2026 04:48:57 +0000 (0:00:00.150) 0:12:50.966 ********
2026-03-19 04:49:09.188016 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-19 04:49:09.188027 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-19 04:49:09.188037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-19 04:49:09.188048 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188058 | orchestrator |
2026-03-19 04:49:09.188068 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:49:09.188080 | orchestrator | Thursday 19 March 2026 04:48:58 +0000 (0:00:00.415) 0:12:51.382 ********
2026-03-19 04:49:09.188091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-19 04:49:09.188101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-19 04:49:09.188111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-19 04:49:09.188121 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188131 | orchestrator |
2026-03-19 04:49:09.188141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:49:09.188151 | orchestrator | Thursday 19 March 2026 04:48:58 +0000 (0:00:00.438) 0:12:51.820 ********
2026-03-19 04:49:09.188162 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-19 04:49:09.188172 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-19 04:49:09.188183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-19 04:49:09.188194 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188216 | orchestrator |
2026-03-19 04:49:09.188228 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:49:09.188240 | orchestrator | Thursday 19 March 2026 04:48:58 +0000 (0:00:00.395) 0:12:52.215 ********
2026-03-19 04:49:09.188252 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188265 | orchestrator |
2026-03-19 04:49:09.188276 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 04:49:09.188288 | orchestrator | Thursday 19 March 2026 04:48:59 +0000 (0:00:00.136) 0:12:52.351 ********
2026-03-19 04:49:09.188316 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-19 04:49:09.188338 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188350 | orchestrator |
2026-03-19 04:49:09.188362 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 04:49:09.188375 | orchestrator | Thursday 19 March 2026 04:48:59 +0000 (0:00:00.319) 0:12:52.671 ********
2026-03-19 04:49:09.188386 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:09.188397 | orchestrator |
2026-03-19 04:49:09.188408 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-19 04:49:09.188417 | orchestrator | Thursday 19 March 2026 04:49:00 +0000 (0:00:00.850) 0:12:53.521 ********
2026-03-19 04:49:09.188429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:49:09.188441 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:49:09.188453 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-19 04:49:09.188465 | orchestrator |
2026-03-19 04:49:09.188476 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-19 04:49:09.188486 | orchestrator | Thursday 19 March 2026 04:49:01 +0000 (0:00:00.924) 0:12:54.445 ********
2026-03-19 04:49:09.188497 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-03-19 04:49:09.188508 | orchestrator |
2026-03-19 04:49:09.188519 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-19 04:49:09.188530 | orchestrator | Thursday 19 March 2026 04:49:01 +0000 (0:00:00.195) 0:12:54.641 ********
2026-03-19 04:49:09.188541 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:09.188552 | orchestrator |
2026-03-19 04:49:09.188564 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-19 04:49:09.188574 | orchestrator | Thursday 19 March 2026 04:49:02 +0000 (0:00:01.061) 0:12:55.702 ********
2026-03-19 04:49:09.188586 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:09.188597 | orchestrator |
2026-03-19 04:49:09.188609 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-19 04:49:09.188620 | orchestrator | Thursday 19 March 2026 04:49:02 +0000 (0:00:00.141) 0:12:55.843 ********
2026-03-19 04:49:09.188631 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:49:09.188642 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:49:09.188667 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:49:33.755639 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-03-19 04:49:33.755786 | orchestrator |
2026-03-19 04:49:33.755803 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-19 04:49:33.755824 | orchestrator | Thursday 19 March 2026 04:49:09 +0000 (0:00:06.594) 0:13:02.438 ********
2026-03-19 04:49:33.755833 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.755843 | orchestrator |
2026-03-19 04:49:33.755852 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-19 04:49:33.755861 | orchestrator | Thursday 19 March 2026 04:49:09 +0000 (0:00:00.188) 0:13:02.626 ********
2026-03-19 04:49:33.755870 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-19 04:49:33.755879 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-19 04:49:33.755888 | orchestrator |
2026-03-19 04:49:33.755896 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-19 04:49:33.755921 | orchestrator | Thursday 19 March 2026 04:49:11 +0000 (0:00:02.210) 0:13:04.837 ********
2026-03-19 04:49:33.755931 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-19 04:49:33.755939 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-19 04:49:33.755948 | orchestrator |
2026-03-19 04:49:33.755957 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-19 04:49:33.755966 | orchestrator | Thursday 19 March 2026 04:49:12 +0000 (0:00:01.034) 0:13:05.871 ********
2026-03-19 04:49:33.755974 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.755983 | orchestrator |
2026-03-19 04:49:33.755992 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-19 04:49:33.756001 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.511) 0:13:06.382 ********
2026-03-19 04:49:33.756010 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756019 | orchestrator |
2026-03-19 04:49:33.756027 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-19 04:49:33.756036 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.129) 0:13:06.511 ********
2026-03-19 04:49:33.756045 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756054 | orchestrator |
2026-03-19 04:49:33.756062 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-19 04:49:33.756071 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.120) 0:13:06.631 ********
2026-03-19 04:49:33.756080 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-03-19 04:49:33.756089 | orchestrator |
2026-03-19 04:49:33.756098 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-19 04:49:33.756107 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.188) 0:13:06.820 ********
2026-03-19 04:49:33.756115 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756124 | orchestrator |
2026-03-19 04:49:33.756133 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-19 04:49:33.756142 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.141) 0:13:06.961 ********
2026-03-19 04:49:33.756151 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756159 | orchestrator |
2026-03-19 04:49:33.756168 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-19 04:49:33.756177 | orchestrator | Thursday 19 March 2026 04:49:13 +0000 (0:00:00.137) 0:13:07.098 ********
2026-03-19 04:49:33.756186 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-03-19 04:49:33.756194 | orchestrator |
2026-03-19 04:49:33.756203 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-19 04:49:33.756212 | orchestrator | Thursday 19 March 2026 04:49:14 +0000 (0:00:00.489) 0:13:07.588 ********
2026-03-19 04:49:33.756220 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.756229 | orchestrator |
2026-03-19 04:49:33.756237 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-19 04:49:33.756246 | orchestrator | Thursday 19 March 2026 04:49:15 +0000 (0:00:01.109) 0:13:08.697 ********
2026-03-19 04:49:33.756255 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.756263 | orchestrator |
2026-03-19 04:49:33.756272 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-19 04:49:33.756280 | orchestrator | Thursday 19 March 2026 04:49:16 +0000 (0:00:00.960) 0:13:09.657 ********
2026-03-19 04:49:33.756289 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.756298 | orchestrator |
2026-03-19 04:49:33.756306 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-19 04:49:33.756315 | orchestrator | Thursday 19 March 2026 04:49:17 +0000 (0:00:01.462) 0:13:11.120 ********
2026-03-19 04:49:33.756323 | orchestrator | changed: [testbed-node-2]
2026-03-19 04:49:33.756332 | orchestrator |
2026-03-19 04:49:33.756340 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-19 04:49:33.756349 | orchestrator | Thursday 19 March 2026 04:49:20 +0000 (0:00:02.837) 0:13:13.958 ********
2026-03-19 04:49:33.756364 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-19 04:49:33.756372 | orchestrator |
2026-03-19 04:49:33.756381 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-19 04:49:33.756390 | orchestrator | Thursday 19 March 2026 04:49:21 +0000 (0:00:00.623) 0:13:14.582 ********
2026-03-19 04:49:33.756398 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:49:33.756407 | orchestrator |
2026-03-19 04:49:33.756416 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-19 04:49:33.756424 | orchestrator | Thursday 19 March 2026 04:49:22 +0000 (0:00:01.502) 0:13:16.084 ********
2026-03-19 04:49:33.756433 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:49:33.756441 | orchestrator |
2026-03-19 04:49:33.756450 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-19 04:49:33.756459 | orchestrator | Thursday 19 March 2026 04:49:24 +0000 (0:00:01.357) 0:13:17.441 ********
2026-03-19 04:49:33.756467 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.756476 | orchestrator |
2026-03-19 04:49:33.756485 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-19 04:49:33.756507 | orchestrator | Thursday 19 March 2026 04:49:24 +0000 (0:00:00.284) 0:13:17.725 ********
2026-03-19 04:49:33.756516 | orchestrator | ok: [testbed-node-2]
2026-03-19 04:49:33.756525 | orchestrator |
2026-03-19 04:49:33.756534 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-19 04:49:33.756547 | orchestrator | Thursday 19 March 2026 04:49:24 +0000 (0:00:00.156) 0:13:17.882 ********
2026-03-19 04:49:33.756556 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-19 04:49:33.756564 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-19 04:49:33.756573 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756582 | orchestrator |
2026-03-19 04:49:33.756590 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-19 04:49:33.756599 | orchestrator | Thursday 19 March 2026 04:49:25 +0000 (0:00:00.904) 0:13:18.787 ********
2026-03-19 04:49:33.756607 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-19 04:49:33.756616 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-19 04:49:33.756625 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-19 04:49:33.756633 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-19 04:49:33.756642 | orchestrator | skipping: [testbed-node-2]
2026-03-19 04:49:33.756651 | orchestrator |
2026-03-19 04:49:33.756659 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-03-19 04:49:33.756668 | orchestrator |
2026-03-19 04:49:33.756677 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 04:49:33.756685 | orchestrator | Thursday 19 March 2026 04:49:26 +0000 (0:00:01.108) 0:13:19.895 ********
2026-03-19 04:49:33.756694 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:49:33.756703 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:49:33.756711 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:49:33.756744 | orchestrator |
2026-03-19 04:49:33.756753 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 04:49:33.756762 | orchestrator | Thursday 19 March 2026 04:49:27 +0000 (0:00:00.612) 0:13:20.508 ********
2026-03-19 04:49:33.756770 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:49:33.756779 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:49:33.756788 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:49:33.756796 | orchestrator |
2026-03-19 04:49:33.756805 | orchestrator | TASK [Get pool list] ***********************************************************
2026-03-19 04:49:33.756813 | orchestrator | Thursday 19 March 2026 04:49:28 +0000 (0:00:00.815) 0:13:21.323 ********
2026-03-19 04:49:33.756822 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:49:33.756831 | orchestrator |
2026-03-19 04:49:33.756839 | orchestrator | TASK [Get balancer module status] **********************************************
2026-03-19 04:49:33.756853 | orchestrator | Thursday 19 March 2026 04:49:31 +0000 (0:00:03.189) 0:13:24.513 ********
2026-03-19 04:49:33.756862 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:49:33.756870 | orchestrator |
2026-03-19 04:49:33.756879 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] ****************************************
2026-03-19 04:49:33.756887 | orchestrator | Thursday 19 March 2026 04:49:33 +0000 (0:00:01.962) 0:13:26.475 ********
2026-03-19 04:49:33.756900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-19T02:35:25.138057+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0,
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:33.756926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-19T02:36:31.099450+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '31', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '29', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.168616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-19T02:36:34.871845+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '31', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '29', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.168790 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-19T02:37:35.034044+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '40', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '38', 'auid': 
0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.168832 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-19T02:37:40.691312+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '60', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '54', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.168858 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-19T02:37:46.549136+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '60', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '54', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.878681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-19T02:37:53.038313+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 
'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '171', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '56', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.878848 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-19T02:37:59.303397+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '60', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '56', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.878895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-19T02:38:11.252905+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '60', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '58', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.878910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-03-19T02:38:56.210886+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 68, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:49:34.878944 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-19T02:39:05.696199+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 
2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 76, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:51:00.295645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-19T02:39:14.170103+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '182', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 182, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:51:00.295834 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-19T02:39:22.851772+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 
'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '92', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 92, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:51:00.295873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': 
'2026-03-19T02:39:32.216390+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '100', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 100, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-19 04:51:00.295889 | orchestrator | 2026-03-19 04:51:00.295899 | orchestrator | TASK [Disable balancer] 
******************************************************** 2026-03-19 04:51:00.295907 | orchestrator | Thursday 19 March 2026 04:49:34 +0000 (0:00:01.664) 0:13:28.140 ******** 2026-03-19 04:51:00.295915 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:51:00.295923 | orchestrator | 2026-03-19 04:51:00.295930 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-03-19 04:51:00.295937 | orchestrator | Thursday 19 March 2026 04:49:36 +0000 (0:00:01.859) 0:13:29.999 ******** 2026-03-19 04:51:00.295944 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-19 04:51:00.295953 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-19 04:51:00.295961 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-19 04:51:00.295969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-19 04:51:00.295978 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-19 04:51:00.295985 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-19 04:51:00.295992 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-19 04:51:00.296000 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-19 04:51:00.296007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-19 04:51:00.296014 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-19 04:51:00.296021 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-19 04:51:00.296028 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-19 04:51:00.296036 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-19 04:51:00.296043 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-19 04:51:00.296050 | orchestrator | 2026-03-19 04:51:00.296057 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-03-19 04:51:00.296065 | orchestrator | Thursday 19 March 2026 04:50:53 +0000 (0:01:17.044) 0:14:47.044 ******** 2026-03-19 04:51:00.296077 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-19 04:51:07.277034 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-19 04:51:07.277135 | orchestrator | 2026-03-19 04:51:07.277149 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-19 04:51:07.277159 | orchestrator | 2026-03-19 04:51:07.277168 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:51:07.277192 | orchestrator | Thursday 19 March 2026 04:51:00 +0000 (0:00:06.500) 0:14:53.545 ******** 2026-03-19 04:51:07.277201 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-19 04:51:07.277228 | orchestrator | 2026-03-19 04:51:07.277237 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:51:07.277246 | orchestrator | Thursday 19 March 2026 04:51:00 +0000 (0:00:00.243) 0:14:53.789 ******** 2026-03-19 04:51:07.277255 | orchestrator | ok: [testbed-node-3] 2026-03-19 
04:51:07.277265 | orchestrator | 2026-03-19 04:51:07.277274 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:51:07.277282 | orchestrator | Thursday 19 March 2026 04:51:00 +0000 (0:00:00.446) 0:14:54.235 ******** 2026-03-19 04:51:07.277291 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277300 | orchestrator | 2026-03-19 04:51:07.277309 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:51:07.277317 | orchestrator | Thursday 19 March 2026 04:51:01 +0000 (0:00:00.142) 0:14:54.378 ******** 2026-03-19 04:51:07.277326 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277334 | orchestrator | 2026-03-19 04:51:07.277343 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:51:07.277352 | orchestrator | Thursday 19 March 2026 04:51:01 +0000 (0:00:00.732) 0:14:55.110 ******** 2026-03-19 04:51:07.277361 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277369 | orchestrator | 2026-03-19 04:51:07.277378 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:51:07.277387 | orchestrator | Thursday 19 March 2026 04:51:01 +0000 (0:00:00.148) 0:14:55.259 ******** 2026-03-19 04:51:07.277395 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277404 | orchestrator | 2026-03-19 04:51:07.277413 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:51:07.277421 | orchestrator | Thursday 19 March 2026 04:51:02 +0000 (0:00:00.142) 0:14:55.401 ******** 2026-03-19 04:51:07.277430 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277439 | orchestrator | 2026-03-19 04:51:07.277448 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:51:07.277457 | orchestrator | Thursday 19 March 2026 04:51:02 +0000 
(0:00:00.165) 0:14:55.566 ******** 2026-03-19 04:51:07.277466 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:07.277475 | orchestrator | 2026-03-19 04:51:07.277484 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:51:07.277493 | orchestrator | Thursday 19 March 2026 04:51:02 +0000 (0:00:00.147) 0:14:55.714 ******** 2026-03-19 04:51:07.277502 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277510 | orchestrator | 2026-03-19 04:51:07.277519 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:51:07.277528 | orchestrator | Thursday 19 March 2026 04:51:02 +0000 (0:00:00.128) 0:14:55.843 ******** 2026-03-19 04:51:07.277537 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:51:07.277545 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:51:07.277554 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:51:07.277563 | orchestrator | 2026-03-19 04:51:07.277571 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:51:07.277582 | orchestrator | Thursday 19 March 2026 04:51:03 +0000 (0:00:00.669) 0:14:56.512 ******** 2026-03-19 04:51:07.277592 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:07.277603 | orchestrator | 2026-03-19 04:51:07.277613 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:51:07.277624 | orchestrator | Thursday 19 March 2026 04:51:03 +0000 (0:00:00.264) 0:14:56.777 ******** 2026-03-19 04:51:07.277634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:51:07.277644 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-03-19 04:51:07.277655 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:51:07.277671 | orchestrator | 2026-03-19 04:51:07.277717 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:51:07.277732 | orchestrator | Thursday 19 March 2026 04:51:05 +0000 (0:00:02.109) 0:14:58.887 ******** 2026-03-19 04:51:07.277746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 04:51:07.277759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 04:51:07.277773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 04:51:07.277786 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:07.277799 | orchestrator | 2026-03-19 04:51:07.277813 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:51:07.277827 | orchestrator | Thursday 19 March 2026 04:51:06 +0000 (0:00:00.380) 0:14:59.267 ******** 2026-03-19 04:51:07.277843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.277860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.277892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.277907 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:07.277920 | 
orchestrator | 2026-03-19 04:51:07.277942 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:51:07.277955 | orchestrator | Thursday 19 March 2026 04:51:06 +0000 (0:00:00.879) 0:15:00.146 ******** 2026-03-19 04:51:07.277972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.277989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.278003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:07.278090 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:07.278106 | orchestrator | 2026-03-19 04:51:07.278121 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:51:07.278135 | orchestrator | Thursday 19 March 2026 04:51:07 +0000 (0:00:00.186) 0:15:00.332 ******** 2026-03-19 
04:51:07.278152 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:51:04.039699', 'end': '2026-03-19 04:51:04.084153', 'delta': '0:00:00.044454', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:51:07.278183 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:51:04.608647', 'end': '2026-03-19 04:51:04.645698', 'delta': '0:00:00.037051', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:51:07.278211 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:51:05.436509', 'end': '2026-03-19 04:51:05.481245', 'delta': '0:00:00.044736', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:51:11.629162 | orchestrator | 2026-03-19 04:51:11.629278 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:51:11.629296 | orchestrator | Thursday 19 March 2026 04:51:07 +0000 (0:00:00.202) 0:15:00.535 ******** 2026-03-19 04:51:11.629309 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:11.629321 | orchestrator | 2026-03-19 04:51:11.629351 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:51:11.629364 | orchestrator | Thursday 19 March 2026 04:51:08 +0000 (0:00:00.843) 0:15:01.378 ******** 2026-03-19 04:51:11.629375 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629387 | orchestrator | 2026-03-19 04:51:11.629399 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:51:11.629410 | orchestrator | Thursday 19 March 2026 04:51:08 +0000 (0:00:00.253) 0:15:01.632 ******** 2026-03-19 04:51:11.629422 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:11.629433 | orchestrator | 2026-03-19 04:51:11.629444 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:51:11.629455 | orchestrator | Thursday 19 March 2026 04:51:08 +0000 (0:00:00.149) 0:15:01.781 ******** 2026-03-19 04:51:11.629467 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:51:11.629478 | orchestrator | 2026-03-19 04:51:11.629489 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-03-19 04:51:11.629500 | orchestrator | Thursday 19 March 2026 04:51:09 +0000 (0:00:01.007) 0:15:02.788 ******** 2026-03-19 04:51:11.629512 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:11.629523 | orchestrator | 2026-03-19 04:51:11.629534 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:51:11.629545 | orchestrator | Thursday 19 March 2026 04:51:09 +0000 (0:00:00.134) 0:15:02.923 ******** 2026-03-19 04:51:11.629556 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629568 | orchestrator | 2026-03-19 04:51:11.629579 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:51:11.629590 | orchestrator | Thursday 19 March 2026 04:51:09 +0000 (0:00:00.153) 0:15:03.076 ******** 2026-03-19 04:51:11.629602 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629634 | orchestrator | 2026-03-19 04:51:11.629646 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:51:11.629657 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.216) 0:15:03.292 ******** 2026-03-19 04:51:11.629669 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629680 | orchestrator | 2026-03-19 04:51:11.629719 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:51:11.629734 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.127) 0:15:03.420 ******** 2026-03-19 04:51:11.629747 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629759 | orchestrator | 2026-03-19 04:51:11.629772 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:51:11.629783 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.122) 0:15:03.542 ******** 2026-03-19 04:51:11.629794 | orchestrator | ok: 
[testbed-node-3] 2026-03-19 04:51:11.629804 | orchestrator | 2026-03-19 04:51:11.629815 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:51:11.629826 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.158) 0:15:03.701 ******** 2026-03-19 04:51:11.629836 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629847 | orchestrator | 2026-03-19 04:51:11.629858 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:51:11.629869 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.132) 0:15:03.834 ******** 2026-03-19 04:51:11.629879 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:11.629890 | orchestrator | 2026-03-19 04:51:11.629901 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:51:11.629912 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.166) 0:15:04.000 ******** 2026-03-19 04:51:11.629922 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:11.629933 | orchestrator | 2026-03-19 04:51:11.629944 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:51:11.629955 | orchestrator | Thursday 19 March 2026 04:51:10 +0000 (0:00:00.124) 0:15:04.125 ******** 2026-03-19 04:51:11.629966 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:11.629990 | orchestrator | 2026-03-19 04:51:11.630064 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:51:11.630078 | orchestrator | Thursday 19 March 2026 04:51:11 +0000 (0:00:00.151) 0:15:04.276 ******** 2026-03-19 04:51:11.630091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:11.630126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}})  2026-03-19 04:51:11.630149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:51:11.630180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}})  2026-03-19 04:51:11.630193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:11.630206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:11.630217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:51:11.630229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:11.630241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:51:11.630266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:12.227819 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}})  2026-03-19 04:51:12.227995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}})  2026-03-19 04:51:12.228040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:12.228082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:51:12.228224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:12.228258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:51:12.228292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:51:12.228318 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:12.228337 | orchestrator | 2026-03-19 04:51:12.228354 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:51:12.228371 | orchestrator | Thursday 19 March 2026 04:51:12 +0000 (0:00:01.005) 0:15:05.282 ******** 2026-03-19 04:51:12.228388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.228406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.228425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.228482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:12.405467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:20.689611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:20.689855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:20.689878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:51:20.689912 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.689925 | orchestrator | 2026-03-19 04:51:20.689935 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:51:20.689944 | orchestrator | Thursday 19 March 2026 04:51:12 +0000 (0:00:00.379) 0:15:05.662 ******** 2026-03-19 04:51:20.689966 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:20.689977 | orchestrator | 2026-03-19 04:51:20.689986 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:51:20.689994 | orchestrator | Thursday 19 March 2026 04:51:12 +0000 (0:00:00.506) 0:15:06.169 ******** 2026-03-19 04:51:20.690063 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:20.690074 | orchestrator | 2026-03-19 04:51:20.690083 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:51:20.690092 | orchestrator | Thursday 19 March 2026 04:51:13 +0000 (0:00:00.151) 0:15:06.321 ******** 2026-03-19 04:51:20.690100 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:20.690109 | orchestrator | 2026-03-19 04:51:20.690131 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:51:20.690142 | orchestrator | Thursday 19 March 2026 04:51:13 +0000 (0:00:00.480) 0:15:06.801 ******** 2026-03-19 04:51:20.690188 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690199 | orchestrator | 2026-03-19 04:51:20.690209 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:51:20.690220 | orchestrator | Thursday 19 March 2026 04:51:13 +0000 (0:00:00.122) 0:15:06.924 ******** 2026-03-19 04:51:20.690229 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
04:51:20.690240 | orchestrator | 2026-03-19 04:51:20.690250 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:51:20.690260 | orchestrator | Thursday 19 March 2026 04:51:13 +0000 (0:00:00.235) 0:15:07.160 ******** 2026-03-19 04:51:20.690270 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690280 | orchestrator | 2026-03-19 04:51:20.690290 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:51:20.690301 | orchestrator | Thursday 19 March 2026 04:51:14 +0000 (0:00:00.160) 0:15:07.320 ******** 2026-03-19 04:51:20.690311 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 04:51:20.690322 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 04:51:20.690332 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 04:51:20.690341 | orchestrator | 2026-03-19 04:51:20.690352 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:51:20.690363 | orchestrator | Thursday 19 March 2026 04:51:14 +0000 (0:00:00.933) 0:15:08.253 ******** 2026-03-19 04:51:20.690373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 04:51:20.690384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 04:51:20.690393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 04:51:20.690403 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690414 | orchestrator | 2026-03-19 04:51:20.690423 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:51:20.690434 | orchestrator | Thursday 19 March 2026 04:51:15 +0000 (0:00:00.163) 0:15:08.417 ******** 2026-03-19 04:51:20.690464 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-19 04:51:20.690475 | 
orchestrator | 2026-03-19 04:51:20.690486 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:51:20.690497 | orchestrator | Thursday 19 March 2026 04:51:15 +0000 (0:00:00.227) 0:15:08.645 ******** 2026-03-19 04:51:20.690507 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690518 | orchestrator | 2026-03-19 04:51:20.690528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:51:20.690547 | orchestrator | Thursday 19 March 2026 04:51:15 +0000 (0:00:00.397) 0:15:09.042 ******** 2026-03-19 04:51:20.690556 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690565 | orchestrator | 2026-03-19 04:51:20.690574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:51:20.690583 | orchestrator | Thursday 19 March 2026 04:51:15 +0000 (0:00:00.139) 0:15:09.181 ******** 2026-03-19 04:51:20.690591 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690600 | orchestrator | 2026-03-19 04:51:20.690609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:51:20.690618 | orchestrator | Thursday 19 March 2026 04:51:16 +0000 (0:00:00.145) 0:15:09.327 ******** 2026-03-19 04:51:20.690626 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:20.690635 | orchestrator | 2026-03-19 04:51:20.690644 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:51:20.690652 | orchestrator | Thursday 19 March 2026 04:51:16 +0000 (0:00:00.267) 0:15:09.594 ******** 2026-03-19 04:51:20.690661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:51:20.690670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:51:20.690679 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-19 04:51:20.690782 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690796 | orchestrator | 2026-03-19 04:51:20.690805 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:51:20.690814 | orchestrator | Thursday 19 March 2026 04:51:16 +0000 (0:00:00.394) 0:15:09.989 ******** 2026-03-19 04:51:20.690822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:51:20.690831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:51:20.690839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 04:51:20.690848 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690857 | orchestrator | 2026-03-19 04:51:20.690865 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:51:20.690873 | orchestrator | Thursday 19 March 2026 04:51:17 +0000 (0:00:00.418) 0:15:10.407 ******** 2026-03-19 04:51:20.690881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 04:51:20.690889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 04:51:20.690897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 04:51:20.690905 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:20.690913 | orchestrator | 2026-03-19 04:51:20.690921 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:51:20.690929 | orchestrator | Thursday 19 March 2026 04:51:17 +0000 (0:00:00.389) 0:15:10.797 ******** 2026-03-19 04:51:20.690936 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:20.690944 | orchestrator | 2026-03-19 04:51:20.690952 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:51:20.690960 | orchestrator | Thursday 19 March 2026 04:51:17 +0000 
(0:00:00.161) 0:15:10.958 ******** 2026-03-19 04:51:20.690968 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 04:51:20.690976 | orchestrator | 2026-03-19 04:51:20.690984 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:51:20.690997 | orchestrator | Thursday 19 March 2026 04:51:18 +0000 (0:00:00.367) 0:15:11.326 ******** 2026-03-19 04:51:20.691005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:51:20.691013 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:51:20.691021 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:51:20.691029 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 04:51:20.691037 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:51:20.691044 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:51:20.691059 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:51:20.691067 | orchestrator | 2026-03-19 04:51:20.691075 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:51:20.691082 | orchestrator | Thursday 19 March 2026 04:51:19 +0000 (0:00:01.070) 0:15:12.396 ******** 2026-03-19 04:51:20.691090 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:51:20.691098 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:51:20.691106 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:51:20.691113 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-19 04:51:20.691121 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:51:20.691129 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:51:20.691137 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:51:20.691145 | orchestrator | 2026-03-19 04:51:20.691159 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-19 04:51:35.537769 | orchestrator | Thursday 19 March 2026 04:51:20 +0000 (0:00:01.537) 0:15:13.934 ******** 2026-03-19 04:51:35.537921 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.537939 | orchestrator | 2026-03-19 04:51:35.537953 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-19 04:51:35.537964 | orchestrator | Thursday 19 March 2026 04:51:21 +0000 (0:00:00.492) 0:15:14.426 ******** 2026-03-19 04:51:35.537976 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.537988 | orchestrator | 2026-03-19 04:51:35.537999 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-19 04:51:35.538010 | orchestrator | Thursday 19 March 2026 04:51:21 +0000 (0:00:00.135) 0:15:14.562 ******** 2026-03-19 04:51:35.538091 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.538104 | orchestrator | 2026-03-19 04:51:35.538116 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-19 04:51:35.538127 | orchestrator | Thursday 19 March 2026 04:51:22 +0000 (0:00:00.851) 0:15:15.413 ******** 2026-03-19 04:51:35.538138 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-19 04:51:35.538155 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-19 04:51:35.538174 | orchestrator | 2026-03-19 04:51:35.538193 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-19 04:51:35.538211 | orchestrator | Thursday 19 March 2026 04:51:25 +0000 (0:00:03.170) 0:15:18.583 ******** 2026-03-19 04:51:35.538231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-19 04:51:35.538251 | orchestrator | 2026-03-19 04:51:35.538270 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:51:35.538289 | orchestrator | Thursday 19 March 2026 04:51:25 +0000 (0:00:00.220) 0:15:18.804 ******** 2026-03-19 04:51:35.538309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-19 04:51:35.538328 | orchestrator | 2026-03-19 04:51:35.538346 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:51:35.538366 | orchestrator | Thursday 19 March 2026 04:51:25 +0000 (0:00:00.213) 0:15:19.018 ******** 2026-03-19 04:51:35.538386 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.538406 | orchestrator | 2026-03-19 04:51:35.538425 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:51:35.538444 | orchestrator | Thursday 19 March 2026 04:51:25 +0000 (0:00:00.122) 0:15:19.140 ******** 2026-03-19 04:51:35.538463 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.538483 | orchestrator | 2026-03-19 04:51:35.538502 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:51:35.538557 | orchestrator | Thursday 19 March 2026 04:51:26 +0000 (0:00:00.534) 0:15:19.674 ******** 2026-03-19 04:51:35.538578 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.538597 | orchestrator | 2026-03-19 04:51:35.538617 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:51:35.538636 | orchestrator | Thursday 19 March 2026 
04:51:26 +0000 (0:00:00.533) 0:15:20.208 ******** 2026-03-19 04:51:35.538653 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.538672 | orchestrator | 2026-03-19 04:51:35.538749 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:51:35.538770 | orchestrator | Thursday 19 March 2026 04:51:27 +0000 (0:00:00.565) 0:15:20.773 ******** 2026-03-19 04:51:35.538789 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.538808 | orchestrator | 2026-03-19 04:51:35.538828 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:51:35.538848 | orchestrator | Thursday 19 March 2026 04:51:27 +0000 (0:00:00.134) 0:15:20.907 ******** 2026-03-19 04:51:35.538866 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.538884 | orchestrator | 2026-03-19 04:51:35.538904 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:51:35.538944 | orchestrator | Thursday 19 March 2026 04:51:27 +0000 (0:00:00.131) 0:15:21.039 ******** 2026-03-19 04:51:35.538957 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.538968 | orchestrator | 2026-03-19 04:51:35.538980 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:51:35.538991 | orchestrator | Thursday 19 March 2026 04:51:27 +0000 (0:00:00.127) 0:15:21.167 ******** 2026-03-19 04:51:35.539002 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539013 | orchestrator | 2026-03-19 04:51:35.539024 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:51:35.539035 | orchestrator | Thursday 19 March 2026 04:51:28 +0000 (0:00:00.793) 0:15:21.960 ******** 2026-03-19 04:51:35.539046 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539057 | orchestrator | 2026-03-19 04:51:35.539067 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-03-19 04:51:35.539078 | orchestrator | Thursday 19 March 2026 04:51:29 +0000 (0:00:00.548) 0:15:22.509 ******** 2026-03-19 04:51:35.539089 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.539100 | orchestrator | 2026-03-19 04:51:35.539111 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:51:35.539122 | orchestrator | Thursday 19 March 2026 04:51:29 +0000 (0:00:00.139) 0:15:22.648 ******** 2026-03-19 04:51:35.539133 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.539144 | orchestrator | 2026-03-19 04:51:35.539155 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:51:35.539166 | orchestrator | Thursday 19 March 2026 04:51:29 +0000 (0:00:00.142) 0:15:22.791 ******** 2026-03-19 04:51:35.539177 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539187 | orchestrator | 2026-03-19 04:51:35.539198 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:51:35.539209 | orchestrator | Thursday 19 March 2026 04:51:29 +0000 (0:00:00.145) 0:15:22.937 ******** 2026-03-19 04:51:35.539220 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539231 | orchestrator | 2026-03-19 04:51:35.539242 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:51:35.539253 | orchestrator | Thursday 19 March 2026 04:51:29 +0000 (0:00:00.161) 0:15:23.098 ******** 2026-03-19 04:51:35.539264 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539275 | orchestrator | 2026-03-19 04:51:35.539312 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:51:35.539324 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.172) 0:15:23.270 ******** 2026-03-19 04:51:35.539335 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 04:51:35.539347 | orchestrator | 2026-03-19 04:51:35.539357 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:51:35.539381 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.136) 0:15:23.407 ******** 2026-03-19 04:51:35.539392 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.539403 | orchestrator | 2026-03-19 04:51:35.539413 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:51:35.539424 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.140) 0:15:23.548 ******** 2026-03-19 04:51:35.539435 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.539446 | orchestrator | 2026-03-19 04:51:35.539457 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:51:35.539467 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.138) 0:15:23.686 ******** 2026-03-19 04:51:35.539478 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539489 | orchestrator | 2026-03-19 04:51:35.539500 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:51:35.539510 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.141) 0:15:23.828 ******** 2026-03-19 04:51:35.539521 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:51:35.539532 | orchestrator | 2026-03-19 04:51:35.539543 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:51:35.539556 | orchestrator | Thursday 19 March 2026 04:51:30 +0000 (0:00:00.216) 0:15:24.044 ******** 2026-03-19 04:51:35.539575 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:51:35.539593 | orchestrator | 2026-03-19 04:51:35.539612 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 04:51:35.539630 | 
orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.465) 0:15:24.510 ********
2026-03-19 04:51:35.539646 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.539663 | orchestrator |
2026-03-19 04:51:35.539681 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:51:35.539726 | orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.144) 0:15:24.655 ********
2026-03-19 04:51:35.539745 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.539764 | orchestrator |
2026-03-19 04:51:35.539782 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:51:35.539799 | orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.135) 0:15:24.790 ********
2026-03-19 04:51:35.539818 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.539836 | orchestrator |
2026-03-19 04:51:35.539856 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:51:35.539875 | orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.135) 0:15:24.925 ********
2026-03-19 04:51:35.539894 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.539913 | orchestrator |
2026-03-19 04:51:35.539932 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:51:35.539951 | orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.127) 0:15:25.053 ********
2026-03-19 04:51:35.539969 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.539986 | orchestrator |
2026-03-19 04:51:35.540005 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:51:35.540024 | orchestrator | Thursday 19 March 2026 04:51:31 +0000 (0:00:00.131) 0:15:25.184 ********
2026-03-19 04:51:35.540043 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540062 | orchestrator |
2026-03-19 04:51:35.540081 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:51:35.540096 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.127) 0:15:25.312 ********
2026-03-19 04:51:35.540106 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540117 | orchestrator |
2026-03-19 04:51:35.540129 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:51:35.540140 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.117) 0:15:25.429 ********
2026-03-19 04:51:35.540151 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540162 | orchestrator |
2026-03-19 04:51:35.540173 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:51:35.540196 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.134) 0:15:25.564 ********
2026-03-19 04:51:35.540207 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540218 | orchestrator |
2026-03-19 04:51:35.540229 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:51:35.540240 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.126) 0:15:25.690 ********
2026-03-19 04:51:35.540251 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540262 | orchestrator |
2026-03-19 04:51:35.540273 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:51:35.540283 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.123) 0:15:25.814 ********
2026-03-19 04:51:35.540294 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:35.540305 | orchestrator |
2026-03-19 04:51:35.540316 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:51:35.540327 | orchestrator | Thursday 19 March 2026 04:51:32 +0000 (0:00:00.199) 0:15:26.014 ********
2026-03-19 04:51:35.540338 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:35.540349 | orchestrator |
2026-03-19 04:51:35.540360 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:51:35.540371 | orchestrator | Thursday 19 March 2026 04:51:34 +0000 (0:00:01.309) 0:15:27.323 ********
2026-03-19 04:51:35.540381 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:35.540393 | orchestrator |
2026-03-19 04:51:35.540403 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:51:35.540414 | orchestrator | Thursday 19 March 2026 04:51:35 +0000 (0:00:01.284) 0:15:28.608 ********
2026-03-19 04:51:35.540425 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-19 04:51:35.540436 | orchestrator |
2026-03-19 04:51:35.540458 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:51:51.361034 | orchestrator | Thursday 19 March 2026 04:51:35 +0000 (0:00:00.184) 0:15:28.793 ********
2026-03-19 04:51:51.361185 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.361212 | orchestrator |
2026-03-19 04:51:51.361234 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:51:51.361254 | orchestrator | Thursday 19 March 2026 04:51:35 +0000 (0:00:00.126) 0:15:28.919 ********
2026-03-19 04:51:51.361274 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.361293 | orchestrator |
2026-03-19 04:51:51.361313 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:51:51.361330 | orchestrator | Thursday 19 March 2026 04:51:35 +0000 (0:00:00.141) 0:15:29.061 ********
2026-03-19 04:51:51.361349 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:51:51.361368 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:51:51.361389 | orchestrator |
2026-03-19 04:51:51.361408 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:51:51.361427 | orchestrator | Thursday 19 March 2026 04:51:36 +0000 (0:00:00.823) 0:15:29.885 ********
2026-03-19 04:51:51.361445 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:51.361465 | orchestrator |
2026-03-19 04:51:51.361484 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:51:51.361504 | orchestrator | Thursday 19 March 2026 04:51:37 +0000 (0:00:00.464) 0:15:30.349 ********
2026-03-19 04:51:51.361523 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.361540 | orchestrator |
2026-03-19 04:51:51.361561 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:51:51.361581 | orchestrator | Thursday 19 March 2026 04:51:37 +0000 (0:00:00.150) 0:15:30.499 ********
2026-03-19 04:51:51.361602 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.361622 | orchestrator |
2026-03-19 04:51:51.361642 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:51:51.361662 | orchestrator | Thursday 19 March 2026 04:51:37 +0000 (0:00:00.137) 0:15:30.636 ********
2026-03-19 04:51:51.361769 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.361792 | orchestrator |
2026-03-19 04:51:51.361814 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:51:51.361835 | orchestrator | Thursday 19 March 2026 04:51:37 +0000 (0:00:00.115) 0:15:30.751 ********
2026-03-19 04:51:51.361855 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-19 04:51:51.361876 | orchestrator |
2026-03-19 04:51:51.361896 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:51:51.361917 | orchestrator | Thursday 19 March 2026 04:51:37 +0000 (0:00:00.191) 0:15:30.943 ********
2026-03-19 04:51:51.361936 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:51.361956 | orchestrator |
2026-03-19 04:51:51.362121 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:51:51.362149 | orchestrator | Thursday 19 March 2026 04:51:38 +0000 (0:00:00.704) 0:15:31.647 ********
2026-03-19 04:51:51.362168 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:51:51.362187 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:51:51.362207 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:51:51.362227 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362246 | orchestrator |
2026-03-19 04:51:51.362266 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:51:51.362282 | orchestrator | Thursday 19 March 2026 04:51:38 +0000 (0:00:00.408) 0:15:32.056 ********
2026-03-19 04:51:51.362299 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362317 | orchestrator |
2026-03-19 04:51:51.362341 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:51:51.362359 | orchestrator | Thursday 19 March 2026 04:51:38 +0000 (0:00:00.127) 0:15:32.183 ********
2026-03-19 04:51:51.362375 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362393 | orchestrator |
2026-03-19 04:51:51.362410 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:51:51.362427 | orchestrator | Thursday 19 March 2026 04:51:39 +0000 (0:00:00.169) 0:15:32.353 ********
2026-03-19 04:51:51.362445 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362462 | orchestrator |
2026-03-19 04:51:51.362479 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:51:51.362496 | orchestrator | Thursday 19 March 2026 04:51:39 +0000 (0:00:00.150) 0:15:32.503 ********
2026-03-19 04:51:51.362513 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362531 | orchestrator |
2026-03-19 04:51:51.362548 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:51:51.362565 | orchestrator | Thursday 19 March 2026 04:51:39 +0000 (0:00:00.151) 0:15:32.655 ********
2026-03-19 04:51:51.362581 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362598 | orchestrator |
2026-03-19 04:51:51.362616 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:51:51.362634 | orchestrator | Thursday 19 March 2026 04:51:39 +0000 (0:00:00.143) 0:15:32.798 ********
2026-03-19 04:51:51.362651 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:51.362668 | orchestrator |
2026-03-19 04:51:51.362704 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:51:51.362722 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:01.516) 0:15:34.315 ********
2026-03-19 04:51:51.362738 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:51.362756 | orchestrator |
2026-03-19 04:51:51.362772 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:51:51.362789 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:00.133) 0:15:34.448 ********
2026-03-19 04:51:51.362804 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-19 04:51:51.362821 | orchestrator |
2026-03-19 04:51:51.362872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:51:51.362891 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:00.218) 0:15:34.667 ********
2026-03-19 04:51:51.362906 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362923 | orchestrator |
2026-03-19 04:51:51.362940 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:51:51.362957 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:00.198) 0:15:34.866 ********
2026-03-19 04:51:51.362973 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.362990 | orchestrator |
2026-03-19 04:51:51.363005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:51:51.363021 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:00.144) 0:15:35.010 ********
2026-03-19 04:51:51.363038 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363055 | orchestrator |
2026-03-19 04:51:51.363073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:51:51.363090 | orchestrator | Thursday 19 March 2026 04:51:41 +0000 (0:00:00.134) 0:15:35.145 ********
2026-03-19 04:51:51.363106 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363121 | orchestrator |
2026-03-19 04:51:51.363138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:51:51.363156 | orchestrator | Thursday 19 March 2026 04:51:42 +0000 (0:00:00.396) 0:15:35.541 ********
2026-03-19 04:51:51.363173 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363189 | orchestrator |
2026-03-19 04:51:51.363205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:51:51.363221 | orchestrator | Thursday 19 March 2026 04:51:42 +0000 (0:00:00.151) 0:15:35.693 ********
2026-03-19 04:51:51.363238 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363255 | orchestrator |
2026-03-19 04:51:51.363272 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:51:51.363289 | orchestrator | Thursday 19 March 2026 04:51:42 +0000 (0:00:00.149) 0:15:35.842 ********
2026-03-19 04:51:51.363305 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363320 | orchestrator |
2026-03-19 04:51:51.363337 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:51:51.363353 | orchestrator | Thursday 19 March 2026 04:51:42 +0000 (0:00:00.160) 0:15:36.002 ********
2026-03-19 04:51:51.363371 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:51:51.363387 | orchestrator |
2026-03-19 04:51:51.363404 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:51:51.363419 | orchestrator | Thursday 19 March 2026 04:51:42 +0000 (0:00:00.136) 0:15:36.139 ********
2026-03-19 04:51:51.363435 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:51:51.363451 | orchestrator |
2026-03-19 04:51:51.363467 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:51:51.363483 | orchestrator | Thursday 19 March 2026 04:51:43 +0000 (0:00:00.230) 0:15:36.370 ********
2026-03-19 04:51:51.363501 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-19 04:51:51.363518 | orchestrator |
2026-03-19 04:51:51.363534 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:51:51.363551 | orchestrator | Thursday 19 March 2026 04:51:43 +0000 (0:00:00.200) 0:15:36.570 ********
2026-03-19 04:51:51.363566 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-19 04:51:51.363583 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-19 04:51:51.363601 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-19 04:51:51.363618 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-19 04:51:51.363634 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-19 04:51:51.363650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-19 04:51:51.363666 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-19 04:51:51.363741 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:51:51.363771 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:51:51.363788 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:51:51.363806 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:51:51.363823 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:51:51.363839 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:51:51.363856 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:51:51.363872 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-19 04:51:51.363888 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-19 04:51:51.363905 | orchestrator |
2026-03-19 04:51:51.363922 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:51:51.363939 | orchestrator | Thursday 19 March 2026 04:51:48 +0000 (0:00:05.685) 0:15:42.256 ********
2026-03-19 04:51:51.363956 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-19 04:51:51.363971 | orchestrator |
2026-03-19 04:51:51.363988 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 04:51:51.364005 | orchestrator | Thursday 19 March 2026 04:51:49 +0000 (0:00:00.555) 0:15:42.812 ********
2026-03-19 04:51:51.364023 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 04:51:51.364041 | orchestrator |
2026-03-19 04:51:51.364058 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 04:51:51.364073 | orchestrator | Thursday 19 March 2026 04:51:50 +0000 (0:00:00.522) 0:15:43.335 ********
2026-03-19 04:51:51.364091 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 04:51:51.364107 | orchestrator |
2026-03-19 04:51:51.364160 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:52:09.609301 | orchestrator | Thursday 19 March 2026 04:51:51 +0000 (0:00:01.276) 0:15:44.612 ********
2026-03-19 04:52:09.609438 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609467 | orchestrator |
2026-03-19 04:52:09.609489 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:52:09.609510 | orchestrator | Thursday 19 March 2026 04:51:51 +0000 (0:00:00.141) 0:15:44.753 ********
2026-03-19 04:52:09.609528 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609547 | orchestrator |
2026-03-19 04:52:09.609567 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:52:09.609586 | orchestrator | Thursday 19 March 2026 04:51:51 +0000 (0:00:00.141) 0:15:44.894 ********
2026-03-19 04:52:09.609604 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609622 | orchestrator |
2026-03-19 04:52:09.609634 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:52:09.609645 | orchestrator | Thursday 19 March 2026 04:51:51 +0000 (0:00:00.121) 0:15:45.015 ********
2026-03-19 04:52:09.609683 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609729 | orchestrator |
2026-03-19 04:52:09.609740 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:52:09.609751 | orchestrator | Thursday 19 March 2026 04:51:51 +0000 (0:00:00.120) 0:15:45.136 ********
2026-03-19 04:52:09.609763 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609773 | orchestrator |
2026-03-19 04:52:09.609785 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:52:09.609797 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.132) 0:15:45.269 ********
2026-03-19 04:52:09.609807 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609818 | orchestrator |
2026-03-19 04:52:09.609829 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:52:09.609874 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.152) 0:15:45.421 ********
2026-03-19 04:52:09.609888 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609900 | orchestrator |
2026-03-19 04:52:09.609913 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:52:09.609926 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.139) 0:15:45.561 ********
2026-03-19 04:52:09.609939 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.609951 | orchestrator |
2026-03-19 04:52:09.609965 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:52:09.609977 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.139) 0:15:45.701 ********
2026-03-19 04:52:09.609990 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610002 | orchestrator |
2026-03-19 04:52:09.610015 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:52:09.610119 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.133) 0:15:45.834 ********
2026-03-19 04:52:09.610132 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610145 | orchestrator |
2026-03-19 04:52:09.610157 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:52:09.610170 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.139) 0:15:45.974 ********
2026-03-19 04:52:09.610184 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.610198 | orchestrator |
2026-03-19 04:52:09.610209 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:52:09.610220 | orchestrator | Thursday 19 March 2026 04:51:52 +0000 (0:00:00.195) 0:15:46.169 ********
2026-03-19 04:52:09.610231 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-19 04:52:09.610242 | orchestrator |
2026-03-19 04:52:09.610253 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:52:09.610277 | orchestrator | Thursday 19 March 2026 04:51:56 +0000 (0:00:03.637) 0:15:49.807 ********
2026-03-19 04:52:09.610288 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 04:52:09.610301 | orchestrator |
2026-03-19 04:52:09.610312 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:52:09.610322 | orchestrator | Thursday 19 March 2026 04:51:56 +0000 (0:00:00.447) 0:15:50.254 ********
2026-03-19 04:52:09.610336 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-19 04:52:09.610352 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-19 04:52:09.610364 | orchestrator |
2026-03-19 04:52:09.610375 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:52:09.610386 | orchestrator | Thursday 19 March 2026 04:52:03 +0000 (0:00:06.995) 0:15:57.249 ********
2026-03-19 04:52:09.610397 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610408 | orchestrator |
2026-03-19 04:52:09.610418 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:52:09.610429 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.115) 0:15:57.365 ********
2026-03-19 04:52:09.610440 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610452 | orchestrator |
2026-03-19 04:52:09.610493 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:52:09.610529 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.119) 0:15:57.485 ********
2026-03-19 04:52:09.610547 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610566 | orchestrator |
2026-03-19 04:52:09.610585 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:52:09.610604 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.136) 0:15:57.622 ********
2026-03-19 04:52:09.610622 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610641 | orchestrator |
2026-03-19 04:52:09.610661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:52:09.610678 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.145) 0:15:57.767 ********
2026-03-19 04:52:09.610722 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610733 | orchestrator |
2026-03-19 04:52:09.610745 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:52:09.610756 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.134) 0:15:57.902 ********
2026-03-19 04:52:09.610767 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.610778 | orchestrator |
2026-03-19 04:52:09.610789 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:52:09.610799 | orchestrator | Thursday 19 March 2026 04:52:04 +0000 (0:00:00.209) 0:15:58.111 ********
2026-03-19 04:52:09.610810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 04:52:09.610822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 04:52:09.610833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 04:52:09.610844 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610854 | orchestrator |
2026-03-19 04:52:09.610865 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:52:09.610876 | orchestrator | Thursday 19 March 2026 04:52:05 +0000 (0:00:00.350) 0:15:58.462 ********
2026-03-19 04:52:09.610887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 04:52:09.610898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 04:52:09.610908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 04:52:09.610919 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.610930 | orchestrator |
2026-03-19 04:52:09.610940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:52:09.610951 | orchestrator | Thursday 19 March 2026 04:52:05 +0000 (0:00:00.376) 0:15:58.839 ********
2026-03-19 04:52:09.610962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 04:52:09.610973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-19 04:52:09.610999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-19 04:52:09.611021 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.611032 | orchestrator |
2026-03-19 04:52:09.611057 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:52:09.611078 | orchestrator | Thursday 19 March 2026 04:52:05 +0000 (0:00:00.347) 0:15:59.186 ********
2026-03-19 04:52:09.611089 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.611100 | orchestrator |
2026-03-19 04:52:09.611111 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 04:52:09.611122 | orchestrator | Thursday 19 March 2026 04:52:06 +0000 (0:00:00.151) 0:15:59.338 ********
2026-03-19 04:52:09.611133 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-19 04:52:09.611144 | orchestrator |
2026-03-19 04:52:09.611155 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 04:52:09.611166 | orchestrator | Thursday 19 March 2026 04:52:06 +0000 (0:00:00.769) 0:16:00.108 ********
2026-03-19 04:52:09.611177 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.611188 | orchestrator |
2026-03-19 04:52:09.611207 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-19 04:52:09.611218 | orchestrator | Thursday 19 March 2026 04:52:07 +0000 (0:00:00.818) 0:16:00.926 ********
2026-03-19 04:52:09.611239 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.611250 | orchestrator |
2026-03-19 04:52:09.611261 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-19 04:52:09.611272 | orchestrator | Thursday 19 March 2026 04:52:07 +0000 (0:00:00.129) 0:16:01.055 ********
2026-03-19 04:52:09.611283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:52:09.611295 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:52:09.611306 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:52:09.611317 | orchestrator |
2026-03-19 04:52:09.611328 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-19 04:52:09.611338 | orchestrator | Thursday 19 March 2026 04:52:08 +0000 (0:00:00.596) 0:16:01.652 ********
2026-03-19 04:52:09.611350 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-03-19 04:52:09.611361 | orchestrator |
2026-03-19 04:52:09.611371 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-19 04:52:09.611382 | orchestrator | Thursday 19 March 2026 04:52:08 +0000 (0:00:00.525) 0:16:02.178 ********
2026-03-19 04:52:09.611393 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.611404 | orchestrator |
2026-03-19 04:52:09.611415 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-19 04:52:09.611425 | orchestrator | Thursday 19 March 2026 04:52:09 +0000 (0:00:00.118) 0:16:02.296 ********
2026-03-19 04:52:09.611436 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:09.611447 | orchestrator |
2026-03-19 04:52:09.611458 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-19 04:52:09.611469 | orchestrator | Thursday 19 March 2026 04:52:09 +0000 (0:00:00.123) 0:16:02.420 ********
2026-03-19 04:52:09.611480 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:09.611491 | orchestrator |
2026-03-19 04:52:09.611510 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-19 04:52:54.109636 | orchestrator | Thursday 19 March 2026 04:52:09 +0000 (0:00:00.441) 0:16:02.862 ********
2026-03-19 04:52:54.109764 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:54.109778 | orchestrator |
2026-03-19 04:52:54.109788 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-19 04:52:54.109796 | orchestrator | Thursday 19 March 2026 04:52:09 +0000 (0:00:00.163) 0:16:03.025 ********
2026-03-19 04:52:54.109805 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-19 04:52:54.109814 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-19 04:52:54.109823 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-19 04:52:54.109831 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-19 04:52:54.109840 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-19 04:52:54.109848 | orchestrator |
2026-03-19 04:52:54.109856 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-19 04:52:54.109864 | orchestrator | Thursday 19 March 2026 04:52:11 +0000 (0:00:02.064) 0:16:05.089 ********
2026-03-19 04:52:54.109872 | orchestrator | skipping: [testbed-node-3]
2026-03-19 04:52:54.109881 | orchestrator |
2026-03-19 04:52:54.109889 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-19 04:52:54.109897 | orchestrator | Thursday 19 March 2026 04:52:11 +0000 (0:00:00.127) 0:16:05.216 ********
2026-03-19 04:52:54.109905 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-03-19 04:52:54.109913 | orchestrator |
2026-03-19 04:52:54.109921 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-19 04:52:54.109930 | orchestrator | Thursday 19 March 2026 04:52:12 +0000 (0:00:00.831) 0:16:06.048 ********
2026-03-19 04:52:54.109959 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-19 04:52:54.109968 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-19 04:52:54.109976 | orchestrator |
2026-03-19 04:52:54.109985 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-19 04:52:54.109993 | orchestrator | Thursday 19 March 2026 04:52:13 +0000 (0:00:00.916) 0:16:06.965 ********
2026-03-19 04:52:54.110001 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 04:52:54.110009 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-19 04:52:54.110064 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-19 04:52:54.110073 | orchestrator |
2026-03-19 04:52:54.110081 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-19 04:52:54.110089 | orchestrator | Thursday 19 March 2026 04:52:15 +0000 (0:00:02.246) 0:16:09.211 ********
2026-03-19 04:52:54.110097 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-19 04:52:54.110105 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-19 04:52:54.110113 | orchestrator | ok: [testbed-node-3]
2026-03-19 04:52:54.110121 | orchestrator | 2026-03-19 04:52:54.110129 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-19 04:52:54.110137 | orchestrator | Thursday 19 March 2026 04:52:16 +0000 (0:00:00.981) 0:16:10.192 ******** 2026-03-19 04:52:54.110145 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110153 | orchestrator | 2026-03-19 04:52:54.110161 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-19 04:52:54.110169 | orchestrator | Thursday 19 March 2026 04:52:17 +0000 (0:00:00.208) 0:16:10.401 ******** 2026-03-19 04:52:54.110177 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110185 | orchestrator | 2026-03-19 04:52:54.110208 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-19 04:52:54.110218 | orchestrator | Thursday 19 March 2026 04:52:17 +0000 (0:00:00.096) 0:16:10.497 ******** 2026-03-19 04:52:54.110227 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110236 | orchestrator | 2026-03-19 04:52:54.110246 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-19 04:52:54.110255 | orchestrator | Thursday 19 March 2026 04:52:17 +0000 (0:00:00.125) 0:16:10.623 ******** 2026-03-19 04:52:54.110264 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-03-19 04:52:54.110273 | orchestrator | 2026-03-19 04:52:54.110283 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-19 04:52:54.110292 | orchestrator | Thursday 19 March 2026 04:52:17 +0000 (0:00:00.581) 0:16:11.205 ******** 2026-03-19 04:52:54.110301 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:52:54.110310 | orchestrator | 2026-03-19 04:52:54.110320 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-03-19 04:52:54.110329 | orchestrator | Thursday 19 March 2026 04:52:18 +0000 (0:00:00.462) 0:16:11.667 ******** 2026-03-19 04:52:54.110339 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:52:54.110348 | orchestrator | 2026-03-19 04:52:54.110357 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-19 04:52:54.110366 | orchestrator | Thursday 19 March 2026 04:52:21 +0000 (0:00:02.624) 0:16:14.292 ******** 2026-03-19 04:52:54.110375 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-03-19 04:52:54.110384 | orchestrator | 2026-03-19 04:52:54.110393 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-19 04:52:54.110402 | orchestrator | Thursday 19 March 2026 04:52:21 +0000 (0:00:00.559) 0:16:14.852 ******** 2026-03-19 04:52:54.110411 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:52:54.110420 | orchestrator | 2026-03-19 04:52:54.110430 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-19 04:52:54.110439 | orchestrator | Thursday 19 March 2026 04:52:22 +0000 (0:00:01.120) 0:16:15.972 ******** 2026-03-19 04:52:54.110448 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:52:54.110464 | orchestrator | 2026-03-19 04:52:54.110473 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-19 04:52:54.110496 | orchestrator | Thursday 19 March 2026 04:52:23 +0000 (0:00:00.983) 0:16:16.956 ******** 2026-03-19 04:52:54.110506 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:52:54.110514 | orchestrator | 2026-03-19 04:52:54.110524 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-19 04:52:54.110532 | orchestrator | Thursday 19 March 2026 04:52:25 +0000 (0:00:01.359) 0:16:18.316 ******** 2026-03-19 04:52:54.110541 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 04:52:54.110550 | orchestrator | 2026-03-19 04:52:54.110558 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-19 04:52:54.110566 | orchestrator | Thursday 19 March 2026 04:52:25 +0000 (0:00:00.147) 0:16:18.463 ******** 2026-03-19 04:52:54.110574 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110582 | orchestrator | 2026-03-19 04:52:54.110590 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-19 04:52:54.110598 | orchestrator | Thursday 19 March 2026 04:52:25 +0000 (0:00:00.133) 0:16:18.597 ******** 2026-03-19 04:52:54.110606 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 04:52:54.110614 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-19 04:52:54.110622 | orchestrator | 2026-03-19 04:52:54.110630 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-19 04:52:54.110638 | orchestrator | Thursday 19 March 2026 04:52:26 +0000 (0:00:00.848) 0:16:19.445 ******** 2026-03-19 04:52:54.110646 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 04:52:54.110654 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-19 04:52:54.110662 | orchestrator | 2026-03-19 04:52:54.110670 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-19 04:52:54.110678 | orchestrator | Thursday 19 March 2026 04:52:28 +0000 (0:00:01.868) 0:16:21.313 ******** 2026-03-19 04:52:54.110686 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-19 04:52:54.110695 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-19 04:52:54.110719 | orchestrator | 2026-03-19 04:52:54.110728 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-19 04:52:54.110736 | orchestrator | Thursday 19 March 2026 04:52:31 +0000 (0:00:03.810) 0:16:25.124 ******** 2026-03-19 04:52:54.110744 | orchestrator 
| skipping: [testbed-node-3] 2026-03-19 04:52:54.110752 | orchestrator | 2026-03-19 04:52:54.110760 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-19 04:52:54.110768 | orchestrator | Thursday 19 March 2026 04:52:32 +0000 (0:00:00.242) 0:16:25.367 ******** 2026-03-19 04:52:54.110776 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110784 | orchestrator | 2026-03-19 04:52:54.110792 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-19 04:52:54.110800 | orchestrator | Thursday 19 March 2026 04:52:32 +0000 (0:00:00.232) 0:16:25.599 ******** 2026-03-19 04:52:54.110808 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110816 | orchestrator | 2026-03-19 04:52:54.110824 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-19 04:52:54.110832 | orchestrator | Thursday 19 March 2026 04:52:32 +0000 (0:00:00.292) 0:16:25.892 ******** 2026-03-19 04:52:54.110840 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110848 | orchestrator | 2026-03-19 04:52:54.110856 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-19 04:52:54.110864 | orchestrator | Thursday 19 March 2026 04:52:32 +0000 (0:00:00.109) 0:16:26.001 ******** 2026-03-19 04:52:54.110872 | orchestrator | skipping: [testbed-node-3] 2026-03-19 04:52:54.110880 | orchestrator | 2026-03-19 04:52:54.110888 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-19 04:52:54.110896 | orchestrator | Thursday 19 March 2026 04:52:33 +0000 (0:00:00.403) 0:16:26.405 ******** 2026-03-19 04:52:54.110909 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-19 04:52:54.110923 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-03-19 04:52:54.110931 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-03-19 04:52:54.110939 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-03-19 04:52:54.110947 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (596 retries left).
2026-03-19 04:52:54.110955 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (595 retries left).
2026-03-19 04:52:54.110963 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:52:54.110971 | orchestrator |
2026-03-19 04:52:54.110979 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-19 04:52:54.110987 | orchestrator |
2026-03-19 04:52:54.110995 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 04:52:54.111003 | orchestrator | Thursday 19 March 2026 04:52:52 +0000 (0:00:19.686) 0:16:46.091 ********
2026-03-19 04:52:54.111011 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-03-19 04:52:54.111019 | orchestrator |
2026-03-19 04:52:54.111027 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 04:52:54.111035 | orchestrator | Thursday 19 March 2026 04:52:53 +0000 (0:00:00.238) 0:16:46.330 ********
2026-03-19 04:52:54.111043 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:52:54.111051 | orchestrator |
2026-03-19 04:52:54.111059 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-19 04:52:54.111067 | orchestrator | Thursday 19 March 2026 04:52:53 +0000 (0:00:00.436) 0:16:46.766 ********
2026-03-19 04:52:54.111075 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:52:54.111083 | orchestrator |
2026-03-19 04:52:54.111091 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 04:52:54.111099 | orchestrator | Thursday 19 March 2026 04:52:53 +0000 (0:00:00.135) 0:16:46.902 ********
2026-03-19 04:52:54.111107 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:52:54.111116 | orchestrator |
2026-03-19 04:52:54.111129 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 04:53:01.374803 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.457) 0:16:47.359 ********
2026-03-19 04:53:01.374881 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.374887 | orchestrator |
2026-03-19 04:53:01.374892 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-19 04:53:01.374897 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.149) 0:16:47.509 ********
2026-03-19 04:53:01.374901 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.374905 | orchestrator |
2026-03-19 04:53:01.374909 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-19 04:53:01.374913 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.140) 0:16:47.650 ********
2026-03-19 04:53:01.374917 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.374921 | orchestrator |
2026-03-19 04:53:01.374925 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-19 04:53:01.374930 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.149) 0:16:47.799 ********
2026-03-19 04:53:01.374934 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:01.374939 | orchestrator |
2026-03-19 04:53:01.374943 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-19 04:53:01.374947 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.141) 0:16:47.941 ********
2026-03-19 04:53:01.374951 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.374954 | orchestrator |
2026-03-19 04:53:01.374958 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-19 04:53:01.374962 | orchestrator | Thursday 19 March 2026 04:52:54 +0000 (0:00:00.127) 0:16:48.069 ********
2026-03-19 04:53:01.374984 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:53:01.374988 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:53:01.374992 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:53:01.374996 | orchestrator |
2026-03-19 04:53:01.375000 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-19 04:53:01.375003 | orchestrator | Thursday 19 March 2026 04:52:56 +0000 (0:00:01.246) 0:16:49.315 ********
2026-03-19 04:53:01.375007 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.375011 | orchestrator |
2026-03-19 04:53:01.375015 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-19 04:53:01.375018 | orchestrator | Thursday 19 March 2026 04:52:56 +0000 (0:00:00.277) 0:16:49.592 ********
2026-03-19 04:53:01.375022 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:53:01.375026 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:53:01.375030 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:53:01.375034 | orchestrator |
2026-03-19 04:53:01.375037 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-19 04:53:01.375041 | orchestrator | Thursday 19 March 2026 04:52:58 +0000 (0:00:01.903) 0:16:51.496 ********
2026-03-19 04:53:01.375045 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-19 04:53:01.375049 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-19 04:53:01.375053 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-19 04:53:01.375058 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:01.375062 | orchestrator |
2026-03-19 04:53:01.375066 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-19 04:53:01.375086 | orchestrator | Thursday 19 March 2026 04:52:58 +0000 (0:00:00.416) 0:16:51.912 ********
2026-03-19 04:53:01.375092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375114 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:01.375117 | orchestrator |
2026-03-19 04:53:01.375121 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-19 04:53:01.375125 | orchestrator | Thursday 19 March 2026 04:52:59 +0000 (0:00:00.615) 0:16:52.528 ********
2026-03-19 04:53:01.375130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375154 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375158 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:01.375162 | orchestrator |
2026-03-19 04:53:01.375166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-19 04:53:01.375170 | orchestrator | Thursday 19 March 2026 04:52:59 +0000 (0:00:00.162) 0:16:52.691 ********
2026-03-19 04:53:01.375176 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:52:56.901664', 'end': '2026-03-19 04:52:56.962697', 'delta': '0:00:00.061033', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375183 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:52:57.491877', 'end': '2026-03-19 04:52:57.541168', 'delta': '0:00:00.049291', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375190 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:52:58.039775', 'end': '2026-03-19 04:52:58.086096', 'delta': '0:00:00.046321', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-19 04:53:01.375194 | orchestrator |
2026-03-19 04:53:01.375198 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-19 04:53:01.375201 | orchestrator | Thursday 19 March 2026 04:52:59 +0000 (0:00:00.192) 0:16:52.884 ********
2026-03-19 04:53:01.375205 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.375209 | orchestrator |
2026-03-19 04:53:01.375213 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-19 04:53:01.375216 | orchestrator | Thursday 19 March 2026 04:52:59 +0000 (0:00:00.257) 0:16:53.141 ********
2026-03-19 04:53:01.375220 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:01.375224 | orchestrator |
2026-03-19 04:53:01.375228 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-19 04:53:01.375232 | orchestrator | Thursday 19 March 2026 04:53:00 +0000 (0:00:00.252) 0:16:53.394 ********
2026-03-19 04:53:01.375236 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:01.375242 | orchestrator |
2026-03-19 04:53:01.375246 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-19 04:53:01.375250 | orchestrator | Thursday 19 March 2026 04:53:00 +0000 (0:00:00.131) 0:16:53.525 ********
2026-03-19 04:53:01.375254 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-19 04:53:01.375258 | orchestrator |
2026-03-19 04:53:01.375261 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:53:01.375265 | orchestrator | Thursday 19 March 2026 04:53:01 +0000 (0:00:00.969) 0:16:54.494 ********
2026-03-19 04:53:01.375271 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:03.535210 | orchestrator |
2026-03-19 04:53:03.535315 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-19 04:53:03.535331 | orchestrator | Thursday 19 March 2026 04:53:01 +0000 (0:00:00.139) 0:16:54.634 ********
2026-03-19 04:53:03.535343 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535355 | orchestrator |
2026-03-19 04:53:03.535370 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-19 04:53:03.535395 | orchestrator | Thursday 19 March 2026 04:53:01 +0000 (0:00:00.118) 0:16:54.752 ********
2026-03-19 04:53:03.535419 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535437 | orchestrator |
2026-03-19 04:53:03.535454 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-19 04:53:03.535473 | orchestrator | Thursday 19 March 2026 04:53:02 +0000 (0:00:00.816) 0:16:55.568 ********
2026-03-19 04:53:03.535489 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535506 | orchestrator |
2026-03-19 04:53:03.535523 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-19 04:53:03.535541 | orchestrator | Thursday 19 March 2026 04:53:02 +0000 (0:00:00.130) 0:16:55.698 ********
2026-03-19 04:53:03.535560 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535578 | orchestrator |
2026-03-19 04:53:03.535597 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-19 04:53:03.535616 | orchestrator | Thursday 19 March 2026 04:53:02 +0000 (0:00:00.116) 0:16:55.814 ********
2026-03-19 04:53:03.535634 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:03.535689 | orchestrator |
2026-03-19 04:53:03.535701 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-19 04:53:03.535747 | orchestrator | Thursday 19 March 2026 04:53:02 +0000 (0:00:00.164) 0:16:55.978 ********
2026-03-19 04:53:03.535761 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535774 | orchestrator |
2026-03-19 04:53:03.535786 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-19 04:53:03.535799 | orchestrator | Thursday 19 March 2026 04:53:02 +0000 (0:00:00.118) 0:16:56.097 ********
2026-03-19 04:53:03.535811 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:03.535824 | orchestrator |
2026-03-19 04:53:03.535837 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-19 04:53:03.535850 | orchestrator | Thursday 19 March 2026 04:53:03 +0000 (0:00:00.176) 0:16:56.273 ********
2026-03-19 04:53:03.535862 | orchestrator | skipping: [testbed-node-4]
2026-03-19 04:53:03.535875 | orchestrator |
2026-03-19 04:53:03.535888 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-19 04:53:03.535902 | orchestrator | Thursday 19 March 2026 04:53:03 +0000 (0:00:00.135) 0:16:56.409 ********
2026-03-19 04:53:03.535915 | orchestrator | ok: [testbed-node-4]
2026-03-19 04:53:03.535928 | orchestrator |
2026-03-19 04:53:03.535940 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-19 04:53:03.535952 | orchestrator | Thursday 19 March 2026 04:53:03 +0000 (0:00:00.159) 0:16:56.568 ********
2026-03-19 04:53:03.535967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}})
2026-03-19 04:53:03.536047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:53:03.536082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}})
2026-03-19 04:53:03.536096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-19 04:53:03.536133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.536202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}})
2026-03-19 04:53:03.536235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}})
2026-03-19 04:53:03.867196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.867306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-19 04:53:03.867336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.867345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-19 04:53:03.867353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters':
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:53:03.867361 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:03.867369 | orchestrator | 2026-03-19 04:53:03.867388 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:53:03.867395 | orchestrator | Thursday 19 March 2026 04:53:03 +0000 (0:00:00.349) 0:16:56.918 ******** 2026-03-19 04:53:03.867402 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:03.867410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:03.867426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:03.867434 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:03.867442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:03.867453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043354 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043415 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:04.043445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:17.006189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:53:17.006289 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006301 | orchestrator | 2026-03-19 04:53:17.006310 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:53:17.006318 | orchestrator | Thursday 19 March 2026 04:53:04 +0000 (0:00:00.380) 0:16:57.299 ******** 2026-03-19 04:53:17.006325 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.006333 | orchestrator | 2026-03-19 04:53:17.006340 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:53:17.006346 | orchestrator | Thursday 19 March 2026 04:53:04 +0000 (0:00:00.481) 0:16:57.780 ******** 2026-03-19 04:53:17.006353 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.006360 | orchestrator | 2026-03-19 04:53:17.006367 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:53:17.006373 | orchestrator | Thursday 19 March 2026 04:53:04 +0000 (0:00:00.131) 0:16:57.912 ******** 2026-03-19 04:53:17.006380 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.006387 | orchestrator | 2026-03-19 04:53:17.006394 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:53:17.006400 | orchestrator | Thursday 19 March 2026 04:53:05 +0000 (0:00:00.477) 0:16:58.389 ******** 2026-03-19 04:53:17.006407 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006414 | orchestrator | 2026-03-19 04:53:17.006421 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:53:17.006428 | orchestrator | Thursday 19 March 2026 04:53:05 +0000 (0:00:00.406) 0:16:58.795 ******** 2026-03-19 04:53:17.006435 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
04:53:17.006441 | orchestrator | 2026-03-19 04:53:17.006448 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:53:17.006466 | orchestrator | Thursday 19 March 2026 04:53:05 +0000 (0:00:00.238) 0:16:59.034 ******** 2026-03-19 04:53:17.006474 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006480 | orchestrator | 2026-03-19 04:53:17.006487 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:53:17.006502 | orchestrator | Thursday 19 March 2026 04:53:05 +0000 (0:00:00.138) 0:16:59.173 ******** 2026-03-19 04:53:17.006509 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 04:53:17.006516 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 04:53:17.006523 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 04:53:17.006530 | orchestrator | 2026-03-19 04:53:17.006537 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:53:17.006544 | orchestrator | Thursday 19 March 2026 04:53:06 +0000 (0:00:00.656) 0:16:59.829 ******** 2026-03-19 04:53:17.006568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 04:53:17.006575 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 04:53:17.006582 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 04:53:17.006589 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006596 | orchestrator | 2026-03-19 04:53:17.006602 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:53:17.006609 | orchestrator | Thursday 19 March 2026 04:53:06 +0000 (0:00:00.149) 0:16:59.979 ******** 2026-03-19 04:53:17.006616 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-19 04:53:17.006623 | 
orchestrator | 2026-03-19 04:53:17.006631 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:53:17.006639 | orchestrator | Thursday 19 March 2026 04:53:06 +0000 (0:00:00.195) 0:17:00.174 ******** 2026-03-19 04:53:17.006645 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006652 | orchestrator | 2026-03-19 04:53:17.006659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:53:17.006666 | orchestrator | Thursday 19 March 2026 04:53:07 +0000 (0:00:00.157) 0:17:00.331 ******** 2026-03-19 04:53:17.006672 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006679 | orchestrator | 2026-03-19 04:53:17.006686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:53:17.006693 | orchestrator | Thursday 19 March 2026 04:53:07 +0000 (0:00:00.138) 0:17:00.469 ******** 2026-03-19 04:53:17.006700 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006724 | orchestrator | 2026-03-19 04:53:17.006730 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:53:17.006737 | orchestrator | Thursday 19 March 2026 04:53:07 +0000 (0:00:00.141) 0:17:00.611 ******** 2026-03-19 04:53:17.006744 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.006751 | orchestrator | 2026-03-19 04:53:17.006757 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:53:17.006764 | orchestrator | Thursday 19 March 2026 04:53:07 +0000 (0:00:00.240) 0:17:00.852 ******** 2026-03-19 04:53:17.006771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:53:17.006791 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:53:17.006798 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-03-19 04:53:17.006805 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006812 | orchestrator | 2026-03-19 04:53:17.006818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:53:17.006825 | orchestrator | Thursday 19 March 2026 04:53:08 +0000 (0:00:00.688) 0:17:01.540 ******** 2026-03-19 04:53:17.006832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:53:17.006838 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:53:17.006845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 04:53:17.006852 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006859 | orchestrator | 2026-03-19 04:53:17.006871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:53:17.006878 | orchestrator | Thursday 19 March 2026 04:53:08 +0000 (0:00:00.662) 0:17:02.203 ******** 2026-03-19 04:53:17.006885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:53:17.006891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:53:17.006898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 04:53:17.006905 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:17.006912 | orchestrator | 2026-03-19 04:53:17.006918 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:53:17.006925 | orchestrator | Thursday 19 March 2026 04:53:09 +0000 (0:00:00.946) 0:17:03.150 ******** 2026-03-19 04:53:17.006938 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.006945 | orchestrator | 2026-03-19 04:53:17.006952 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:53:17.006959 | orchestrator | Thursday 19 March 2026 04:53:10 +0000 
(0:00:00.155) 0:17:03.306 ******** 2026-03-19 04:53:17.006966 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 04:53:17.006972 | orchestrator | 2026-03-19 04:53:17.006979 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:53:17.006986 | orchestrator | Thursday 19 March 2026 04:53:10 +0000 (0:00:00.369) 0:17:03.675 ******** 2026-03-19 04:53:17.006993 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:53:17.006999 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:53:17.007006 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:53:17.007013 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:53:17.007019 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 04:53:17.007031 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:53:17.007041 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:53:17.007051 | orchestrator | 2026-03-19 04:53:17.007069 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:53:17.007081 | orchestrator | Thursday 19 March 2026 04:53:11 +0000 (0:00:00.762) 0:17:04.438 ******** 2026-03-19 04:53:17.007091 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:53:17.007102 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:53:17.007111 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:53:17.007121 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-19 04:53:17.007132 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 04:53:17.007143 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:53:17.007153 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:53:17.007164 | orchestrator | 2026-03-19 04:53:17.007175 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-19 04:53:17.007185 | orchestrator | Thursday 19 March 2026 04:53:12 +0000 (0:00:01.581) 0:17:06.019 ******** 2026-03-19 04:53:17.007196 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.007206 | orchestrator | 2026-03-19 04:53:17.007216 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-19 04:53:17.007226 | orchestrator | Thursday 19 March 2026 04:53:13 +0000 (0:00:00.476) 0:17:06.496 ******** 2026-03-19 04:53:17.007236 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.007246 | orchestrator | 2026-03-19 04:53:17.007257 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-19 04:53:17.007268 | orchestrator | Thursday 19 March 2026 04:53:13 +0000 (0:00:00.123) 0:17:06.619 ******** 2026-03-19 04:53:17.007279 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:17.007289 | orchestrator | 2026-03-19 04:53:17.007300 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-19 04:53:17.007311 | orchestrator | Thursday 19 March 2026 04:53:13 +0000 (0:00:00.231) 0:17:06.850 ******** 2026-03-19 04:53:17.007323 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-19 04:53:17.007333 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-19 04:53:17.007344 | orchestrator | 2026-03-19 04:53:17.007355 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-19 04:53:17.007365 | orchestrator | Thursday 19 March 2026 04:53:16 +0000 (0:00:03.196) 0:17:10.047 ******** 2026-03-19 04:53:17.007387 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-19 04:53:17.007398 | orchestrator | 2026-03-19 04:53:17.007408 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:53:17.007430 | orchestrator | Thursday 19 March 2026 04:53:16 +0000 (0:00:00.210) 0:17:10.258 ******** 2026-03-19 04:53:28.493035 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-19 04:53:28.493142 | orchestrator | 2026-03-19 04:53:28.493159 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:53:28.493170 | orchestrator | Thursday 19 March 2026 04:53:17 +0000 (0:00:00.453) 0:17:10.711 ******** 2026-03-19 04:53:28.493181 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493191 | orchestrator | 2026-03-19 04:53:28.493201 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:53:28.493211 | orchestrator | Thursday 19 March 2026 04:53:17 +0000 (0:00:00.123) 0:17:10.835 ******** 2026-03-19 04:53:28.493221 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493231 | orchestrator | 2026-03-19 04:53:28.493241 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:53:28.493266 | orchestrator | Thursday 19 March 2026 04:53:18 +0000 (0:00:00.501) 0:17:11.336 ******** 2026-03-19 04:53:28.493276 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493286 | orchestrator | 2026-03-19 04:53:28.493295 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:53:28.493305 | orchestrator | 
Thursday 19 March 2026 04:53:18 +0000 (0:00:00.550) 0:17:11.886 ******** 2026-03-19 04:53:28.493315 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493324 | orchestrator | 2026-03-19 04:53:28.493333 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:53:28.493343 | orchestrator | Thursday 19 March 2026 04:53:19 +0000 (0:00:00.523) 0:17:12.410 ******** 2026-03-19 04:53:28.493352 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493362 | orchestrator | 2026-03-19 04:53:28.493371 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:53:28.493381 | orchestrator | Thursday 19 March 2026 04:53:19 +0000 (0:00:00.132) 0:17:12.542 ******** 2026-03-19 04:53:28.493391 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493400 | orchestrator | 2026-03-19 04:53:28.493409 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:53:28.493419 | orchestrator | Thursday 19 March 2026 04:53:19 +0000 (0:00:00.120) 0:17:12.663 ******** 2026-03-19 04:53:28.493428 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493438 | orchestrator | 2026-03-19 04:53:28.493447 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:53:28.493457 | orchestrator | Thursday 19 March 2026 04:53:19 +0000 (0:00:00.120) 0:17:12.784 ******** 2026-03-19 04:53:28.493466 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493476 | orchestrator | 2026-03-19 04:53:28.493485 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:53:28.493495 | orchestrator | Thursday 19 March 2026 04:53:20 +0000 (0:00:00.549) 0:17:13.333 ******** 2026-03-19 04:53:28.493505 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493514 | orchestrator | 2026-03-19 04:53:28.493524 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:53:28.493533 | orchestrator | Thursday 19 March 2026 04:53:20 +0000 (0:00:00.512) 0:17:13.846 ******** 2026-03-19 04:53:28.493543 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493554 | orchestrator | 2026-03-19 04:53:28.493564 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:53:28.493576 | orchestrator | Thursday 19 March 2026 04:53:20 +0000 (0:00:00.131) 0:17:13.978 ******** 2026-03-19 04:53:28.493586 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493597 | orchestrator | 2026-03-19 04:53:28.493607 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:53:28.493639 | orchestrator | Thursday 19 March 2026 04:53:20 +0000 (0:00:00.130) 0:17:14.108 ******** 2026-03-19 04:53:28.493650 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493661 | orchestrator | 2026-03-19 04:53:28.493671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 04:53:28.493683 | orchestrator | Thursday 19 March 2026 04:53:20 +0000 (0:00:00.146) 0:17:14.255 ******** 2026-03-19 04:53:28.493693 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493722 | orchestrator | 2026-03-19 04:53:28.493734 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 04:53:28.493744 | orchestrator | Thursday 19 March 2026 04:53:21 +0000 (0:00:00.135) 0:17:14.390 ******** 2026-03-19 04:53:28.493755 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493767 | orchestrator | 2026-03-19 04:53:28.493778 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 04:53:28.493789 | orchestrator | Thursday 19 March 2026 04:53:21 +0000 (0:00:00.412) 0:17:14.803 ******** 2026-03-19 04:53:28.493799 | 
orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493810 | orchestrator | 2026-03-19 04:53:28.493821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 04:53:28.493832 | orchestrator | Thursday 19 March 2026 04:53:21 +0000 (0:00:00.121) 0:17:14.924 ******** 2026-03-19 04:53:28.493842 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493852 | orchestrator | 2026-03-19 04:53:28.493863 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 04:53:28.493874 | orchestrator | Thursday 19 March 2026 04:53:21 +0000 (0:00:00.134) 0:17:15.059 ******** 2026-03-19 04:53:28.493885 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.493896 | orchestrator | 2026-03-19 04:53:28.493907 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 04:53:28.493917 | orchestrator | Thursday 19 March 2026 04:53:21 +0000 (0:00:00.132) 0:17:15.192 ******** 2026-03-19 04:53:28.493926 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493935 | orchestrator | 2026-03-19 04:53:28.493945 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 04:53:28.493954 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.162) 0:17:15.355 ******** 2026-03-19 04:53:28.493964 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.493973 | orchestrator | 2026-03-19 04:53:28.493982 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 04:53:28.493992 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.218) 0:17:15.573 ******** 2026-03-19 04:53:28.494001 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494011 | orchestrator | 2026-03-19 04:53:28.494113 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 
04:53:28.494132 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.127) 0:17:15.701 ******** 2026-03-19 04:53:28.494149 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494163 | orchestrator | 2026-03-19 04:53:28.494181 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 04:53:28.494196 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.131) 0:17:15.833 ******** 2026-03-19 04:53:28.494214 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494231 | orchestrator | 2026-03-19 04:53:28.494248 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 04:53:28.494264 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.119) 0:17:15.952 ******** 2026-03-19 04:53:28.494280 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494290 | orchestrator | 2026-03-19 04:53:28.494308 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 04:53:28.494318 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.127) 0:17:16.080 ******** 2026-03-19 04:53:28.494327 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494337 | orchestrator | 2026-03-19 04:53:28.494346 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 04:53:28.494366 | orchestrator | Thursday 19 March 2026 04:53:22 +0000 (0:00:00.129) 0:17:16.209 ******** 2026-03-19 04:53:28.494376 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494385 | orchestrator | 2026-03-19 04:53:28.494395 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 04:53:28.494404 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.128) 0:17:16.338 ******** 2026-03-19 04:53:28.494414 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494423 | 
orchestrator | 2026-03-19 04:53:28.494432 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 04:53:28.494442 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.395) 0:17:16.733 ******** 2026-03-19 04:53:28.494452 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494461 | orchestrator | 2026-03-19 04:53:28.494470 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 04:53:28.494480 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.130) 0:17:16.863 ******** 2026-03-19 04:53:28.494489 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494498 | orchestrator | 2026-03-19 04:53:28.494508 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 04:53:28.494517 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.132) 0:17:16.996 ******** 2026-03-19 04:53:28.494527 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494536 | orchestrator | 2026-03-19 04:53:28.494545 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 04:53:28.494555 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.124) 0:17:17.121 ******** 2026-03-19 04:53:28.494564 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494574 | orchestrator | 2026-03-19 04:53:28.494583 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-19 04:53:28.494593 | orchestrator | Thursday 19 March 2026 04:53:23 +0000 (0:00:00.138) 0:17:17.259 ******** 2026-03-19 04:53:28.494602 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494611 | orchestrator | 2026-03-19 04:53:28.494621 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 04:53:28.494630 | orchestrator | Thursday 19 
March 2026 04:53:24 +0000 (0:00:00.217) 0:17:17.477 ******** 2026-03-19 04:53:28.494640 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.494649 | orchestrator | 2026-03-19 04:53:28.494659 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 04:53:28.494668 | orchestrator | Thursday 19 March 2026 04:53:25 +0000 (0:00:00.922) 0:17:18.400 ******** 2026-03-19 04:53:28.494677 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.494687 | orchestrator | 2026-03-19 04:53:28.494696 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 04:53:28.494752 | orchestrator | Thursday 19 March 2026 04:53:26 +0000 (0:00:01.198) 0:17:19.598 ******** 2026-03-19 04:53:28.494769 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-19 04:53:28.494786 | orchestrator | 2026-03-19 04:53:28.494800 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 04:53:28.494815 | orchestrator | Thursday 19 March 2026 04:53:26 +0000 (0:00:00.190) 0:17:19.789 ******** 2026-03-19 04:53:28.494832 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494849 | orchestrator | 2026-03-19 04:53:28.494867 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 04:53:28.494884 | orchestrator | Thursday 19 March 2026 04:53:26 +0000 (0:00:00.135) 0:17:19.924 ******** 2026-03-19 04:53:28.494901 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.494911 | orchestrator | 2026-03-19 04:53:28.494921 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-19 04:53:28.494930 | orchestrator | Thursday 19 March 2026 04:53:26 +0000 (0:00:00.119) 0:17:20.044 ******** 2026-03-19 04:53:28.494939 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 04:53:28.494957 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 04:53:28.494967 | orchestrator | 2026-03-19 04:53:28.494976 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 04:53:28.494985 | orchestrator | Thursday 19 March 2026 04:53:27 +0000 (0:00:01.101) 0:17:21.145 ******** 2026-03-19 04:53:28.494995 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:28.495004 | orchestrator | 2026-03-19 04:53:28.495013 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 04:53:28.495022 | orchestrator | Thursday 19 March 2026 04:53:28 +0000 (0:00:00.459) 0:17:21.605 ******** 2026-03-19 04:53:28.495032 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:28.495041 | orchestrator | 2026-03-19 04:53:28.495051 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 04:53:28.495070 | orchestrator | Thursday 19 March 2026 04:53:28 +0000 (0:00:00.140) 0:17:21.745 ******** 2026-03-19 04:53:43.083318 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083400 | orchestrator | 2026-03-19 04:53:43.083408 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 04:53:43.083414 | orchestrator | Thursday 19 March 2026 04:53:28 +0000 (0:00:00.159) 0:17:21.905 ******** 2026-03-19 04:53:43.083418 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083422 | orchestrator | 2026-03-19 04:53:43.083426 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 04:53:43.083431 | orchestrator | Thursday 19 March 2026 04:53:28 +0000 (0:00:00.131) 0:17:22.036 ******** 2026-03-19 04:53:43.083435 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-03-19 04:53:43.083440 | orchestrator | 2026-03-19 04:53:43.083456 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 04:53:43.083460 | orchestrator | Thursday 19 March 2026 04:53:29 +0000 (0:00:00.236) 0:17:22.273 ******** 2026-03-19 04:53:43.083464 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:43.083468 | orchestrator | 2026-03-19 04:53:43.083472 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 04:53:43.083477 | orchestrator | Thursday 19 March 2026 04:53:29 +0000 (0:00:00.695) 0:17:22.968 ******** 2026-03-19 04:53:43.083481 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 04:53:43.083485 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 04:53:43.083488 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 04:53:43.083492 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083496 | orchestrator | 2026-03-19 04:53:43.083500 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 04:53:43.083504 | orchestrator | Thursday 19 March 2026 04:53:29 +0000 (0:00:00.140) 0:17:23.109 ******** 2026-03-19 04:53:43.083507 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083511 | orchestrator | 2026-03-19 04:53:43.083515 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-19 04:53:43.083519 | orchestrator | Thursday 19 March 2026 04:53:29 +0000 (0:00:00.136) 0:17:23.245 ******** 2026-03-19 04:53:43.083523 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083526 | orchestrator | 2026-03-19 04:53:43.083530 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-19 04:53:43.083534 | 
orchestrator | Thursday 19 March 2026 04:53:30 +0000 (0:00:00.156) 0:17:23.401 ******** 2026-03-19 04:53:43.083538 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083542 | orchestrator | 2026-03-19 04:53:43.083545 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 04:53:43.083549 | orchestrator | Thursday 19 March 2026 04:53:30 +0000 (0:00:00.153) 0:17:23.555 ******** 2026-03-19 04:53:43.083555 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083562 | orchestrator | 2026-03-19 04:53:43.083568 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 04:53:43.083595 | orchestrator | Thursday 19 March 2026 04:53:30 +0000 (0:00:00.139) 0:17:23.694 ******** 2026-03-19 04:53:43.083602 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083607 | orchestrator | 2026-03-19 04:53:43.083613 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 04:53:43.083619 | orchestrator | Thursday 19 March 2026 04:53:30 +0000 (0:00:00.397) 0:17:24.091 ******** 2026-03-19 04:53:43.083624 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:43.083630 | orchestrator | 2026-03-19 04:53:43.083635 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 04:53:43.083641 | orchestrator | Thursday 19 March 2026 04:53:32 +0000 (0:00:01.562) 0:17:25.654 ******** 2026-03-19 04:53:43.083647 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:43.083653 | orchestrator | 2026-03-19 04:53:43.083658 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 04:53:43.083663 | orchestrator | Thursday 19 March 2026 04:53:32 +0000 (0:00:00.149) 0:17:25.803 ******** 2026-03-19 04:53:43.083668 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-03-19 04:53:43.083674 | orchestrator | 2026-03-19 04:53:43.083680 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-19 04:53:43.083686 | orchestrator | Thursday 19 March 2026 04:53:32 +0000 (0:00:00.211) 0:17:26.015 ******** 2026-03-19 04:53:43.083691 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083697 | orchestrator | 2026-03-19 04:53:43.083742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 04:53:43.083751 | orchestrator | Thursday 19 March 2026 04:53:32 +0000 (0:00:00.151) 0:17:26.167 ******** 2026-03-19 04:53:43.083757 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083763 | orchestrator | 2026-03-19 04:53:43.083770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 04:53:43.083774 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.153) 0:17:26.320 ******** 2026-03-19 04:53:43.083778 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083782 | orchestrator | 2026-03-19 04:53:43.083785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 04:53:43.083789 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.154) 0:17:26.474 ******** 2026-03-19 04:53:43.083793 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083797 | orchestrator | 2026-03-19 04:53:43.083811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-19 04:53:43.083815 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.142) 0:17:26.617 ******** 2026-03-19 04:53:43.083825 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083829 | orchestrator | 2026-03-19 04:53:43.083832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-19 04:53:43.083836 | orchestrator | 
Thursday 19 March 2026 04:53:33 +0000 (0:00:00.144) 0:17:26.762 ******** 2026-03-19 04:53:43.083840 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083844 | orchestrator | 2026-03-19 04:53:43.083859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 04:53:43.083863 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.149) 0:17:26.911 ******** 2026-03-19 04:53:43.083867 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083871 | orchestrator | 2026-03-19 04:53:43.083874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 04:53:43.083878 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.163) 0:17:27.075 ******** 2026-03-19 04:53:43.083882 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.083886 | orchestrator | 2026-03-19 04:53:43.083890 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 04:53:43.083895 | orchestrator | Thursday 19 March 2026 04:53:33 +0000 (0:00:00.146) 0:17:27.221 ******** 2026-03-19 04:53:43.083899 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:53:43.083909 | orchestrator | 2026-03-19 04:53:43.083919 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 04:53:43.083924 | orchestrator | Thursday 19 March 2026 04:53:34 +0000 (0:00:00.468) 0:17:27.689 ******** 2026-03-19 04:53:43.083928 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-19 04:53:43.083933 | orchestrator | 2026-03-19 04:53:43.083937 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 04:53:43.083942 | orchestrator | Thursday 19 March 2026 04:53:34 +0000 (0:00:00.212) 0:17:27.902 ******** 2026-03-19 04:53:43.083947 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-03-19 04:53:43.083954 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-19 04:53:43.083960 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-19 04:53:43.083967 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-19 04:53:43.083973 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-19 04:53:43.083980 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-19 04:53:43.083987 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-19 04:53:43.083994 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-19 04:53:43.084001 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 04:53:43.084008 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 04:53:43.084013 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 04:53:43.084017 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 04:53:43.084023 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 04:53:43.084030 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 04:53:43.084037 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-19 04:53:43.084044 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-19 04:53:43.084050 | orchestrator | 2026-03-19 04:53:43.084057 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 04:53:43.084062 | orchestrator | Thursday 19 March 2026 04:53:40 +0000 (0:00:05.741) 0:17:33.644 ******** 2026-03-19 04:53:43.084067 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-19 04:53:43.084071 | orchestrator | 2026-03-19 04:53:43.084075 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-19 04:53:43.084080 | orchestrator | Thursday 19 March 2026 04:53:40 +0000 (0:00:00.223) 0:17:33.867 ******** 2026-03-19 04:53:43.084085 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 04:53:43.084090 | orchestrator | 2026-03-19 04:53:43.084095 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-19 04:53:43.084099 | orchestrator | Thursday 19 March 2026 04:53:41 +0000 (0:00:00.510) 0:17:34.378 ******** 2026-03-19 04:53:43.084104 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 04:53:43.084108 | orchestrator | 2026-03-19 04:53:43.084113 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 04:53:43.084117 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.953) 0:17:35.332 ******** 2026-03-19 04:53:43.084123 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084129 | orchestrator | 2026-03-19 04:53:43.084136 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 04:53:43.084143 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.122) 0:17:35.454 ******** 2026-03-19 04:53:43.084149 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084156 | orchestrator | 2026-03-19 04:53:43.084162 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 04:53:43.084173 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.130) 0:17:35.584 ******** 2026-03-19 04:53:43.084179 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084185 | orchestrator | 2026-03-19 04:53:43.084191 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-19 04:53:43.084198 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.123) 0:17:35.708 ******** 2026-03-19 04:53:43.084205 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084211 | orchestrator | 2026-03-19 04:53:43.084218 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 04:53:43.084224 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.111) 0:17:35.819 ******** 2026-03-19 04:53:43.084231 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084238 | orchestrator | 2026-03-19 04:53:43.084244 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 04:53:43.084250 | orchestrator | Thursday 19 March 2026 04:53:42 +0000 (0:00:00.123) 0:17:35.943 ******** 2026-03-19 04:53:43.084257 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:53:43.084264 | orchestrator | 2026-03-19 04:53:43.084272 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 04:54:04.249815 | orchestrator | Thursday 19 March 2026 04:53:43 +0000 (0:00:00.390) 0:17:36.333 ******** 2026-03-19 04:54:04.249913 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.249926 | orchestrator | 2026-03-19 04:54:04.249935 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 04:54:04.249944 | orchestrator | Thursday 19 March 2026 04:53:43 +0000 (0:00:00.139) 0:17:36.473 ******** 2026-03-19 04:54:04.249952 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.249959 | orchestrator | 2026-03-19 04:54:04.249967 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 04:54:04.249987 | orchestrator | Thursday 19 
March 2026 04:53:43 +0000 (0:00:00.135) 0:17:36.608 ******** 2026-03-19 04:54:04.249995 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250002 | orchestrator | 2026-03-19 04:54:04.250009 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 04:54:04.250065 | orchestrator | Thursday 19 March 2026 04:53:43 +0000 (0:00:00.139) 0:17:36.747 ******** 2026-03-19 04:54:04.250076 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250083 | orchestrator | 2026-03-19 04:54:04.250090 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 04:54:04.250098 | orchestrator | Thursday 19 March 2026 04:53:43 +0000 (0:00:00.129) 0:17:36.877 ******** 2026-03-19 04:54:04.250105 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250114 | orchestrator | 2026-03-19 04:54:04.250121 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 04:54:04.250129 | orchestrator | Thursday 19 March 2026 04:53:43 +0000 (0:00:00.212) 0:17:37.090 ******** 2026-03-19 04:54:04.250136 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-19 04:54:04.250143 | orchestrator | 2026-03-19 04:54:04.250151 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 04:54:04.250158 | orchestrator | Thursday 19 March 2026 04:53:47 +0000 (0:00:03.671) 0:17:40.761 ******** 2026-03-19 04:54:04.250166 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 04:54:04.250174 | orchestrator | 2026-03-19 04:54:04.250182 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 04:54:04.250189 | orchestrator | Thursday 19 March 2026 04:53:47 +0000 (0:00:00.177) 0:17:40.938 ******** 2026-03-19 04:54:04.250199 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-19 04:54:04.250230 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-19 04:54:04.250240 | orchestrator | 2026-03-19 04:54:04.250247 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 04:54:04.250254 | orchestrator | Thursday 19 March 2026 04:53:54 +0000 (0:00:06.998) 0:17:47.937 ******** 2026-03-19 04:54:04.250262 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250269 | orchestrator | 2026-03-19 04:54:04.250276 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 04:54:04.250284 | orchestrator | Thursday 19 March 2026 04:53:54 +0000 (0:00:00.135) 0:17:48.073 ******** 2026-03-19 04:54:04.250291 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250298 | orchestrator | 2026-03-19 04:54:04.250307 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:54:04.250316 | orchestrator | Thursday 19 March 2026 04:53:54 +0000 (0:00:00.129) 0:17:48.202 ******** 2026-03-19 04:54:04.250324 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250332 | orchestrator | 2026-03-19 04:54:04.250341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-19 04:54:04.250349 | orchestrator | Thursday 19 March 2026 04:53:55 +0000 (0:00:00.171) 0:17:48.374 ******** 2026-03-19 04:54:04.250358 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250366 | orchestrator | 2026-03-19 04:54:04.250375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:54:04.250384 | orchestrator | Thursday 19 March 2026 04:53:55 +0000 (0:00:00.160) 0:17:48.535 ******** 2026-03-19 04:54:04.250392 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250401 | orchestrator | 2026-03-19 04:54:04.250408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:54:04.250415 | orchestrator | Thursday 19 March 2026 04:53:55 +0000 (0:00:00.416) 0:17:48.951 ******** 2026-03-19 04:54:04.250422 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250430 | orchestrator | 2026-03-19 04:54:04.250437 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:54:04.250444 | orchestrator | Thursday 19 March 2026 04:53:55 +0000 (0:00:00.255) 0:17:49.207 ******** 2026-03-19 04:54:04.250452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:54:04.250459 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:54:04.250466 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 04:54:04.250474 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250481 | orchestrator | 2026-03-19 04:54:04.250489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:54:04.250511 | orchestrator | Thursday 19 March 2026 04:53:56 +0000 (0:00:00.405) 0:17:49.613 ******** 2026-03-19 04:54:04.250519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:54:04.250526 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:54:04.250534 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 04:54:04.250541 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250548 | orchestrator | 2026-03-19 04:54:04.250555 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:54:04.250562 | orchestrator | Thursday 19 March 2026 04:53:56 +0000 (0:00:00.402) 0:17:50.015 ******** 2026-03-19 04:54:04.250575 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 04:54:04.250582 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 04:54:04.250589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 04:54:04.250602 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250610 | orchestrator | 2026-03-19 04:54:04.250617 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:54:04.250624 | orchestrator | Thursday 19 March 2026 04:53:57 +0000 (0:00:00.430) 0:17:50.446 ******** 2026-03-19 04:54:04.250631 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250638 | orchestrator | 2026-03-19 04:54:04.250646 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:54:04.250653 | orchestrator | Thursday 19 March 2026 04:53:57 +0000 (0:00:00.156) 0:17:50.603 ******** 2026-03-19 04:54:04.250660 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 04:54:04.250667 | orchestrator | 2026-03-19 04:54:04.250674 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:54:04.250682 | orchestrator | Thursday 19 March 2026 04:53:57 +0000 (0:00:00.414) 0:17:51.017 ******** 2026-03-19 04:54:04.250689 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250696 | orchestrator | 
2026-03-19 04:54:04.250703 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-19 04:54:04.250725 | orchestrator | Thursday 19 March 2026 04:53:58 +0000 (0:00:00.835) 0:17:51.853 ******** 2026-03-19 04:54:04.250732 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250739 | orchestrator | 2026-03-19 04:54:04.250747 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:54:04.250754 | orchestrator | Thursday 19 March 2026 04:53:58 +0000 (0:00:00.141) 0:17:51.994 ******** 2026-03-19 04:54:04.250761 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:54:04.250769 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:54:04.250777 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:54:04.250784 | orchestrator | 2026-03-19 04:54:04.250791 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-19 04:54:04.250799 | orchestrator | Thursday 19 March 2026 04:53:59 +0000 (0:00:00.964) 0:17:52.959 ******** 2026-03-19 04:54:04.250806 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-19 04:54:04.250813 | orchestrator | 2026-03-19 04:54:04.250820 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-19 04:54:04.250828 | orchestrator | Thursday 19 March 2026 04:54:00 +0000 (0:00:00.445) 0:17:53.404 ******** 2026-03-19 04:54:04.250835 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250842 | orchestrator | 2026-03-19 04:54:04.250850 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-19 04:54:04.250857 | orchestrator | Thursday 19 March 2026 04:54:00 +0000 (0:00:00.132) 
0:17:53.536 ******** 2026-03-19 04:54:04.250864 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.250873 | orchestrator | 2026-03-19 04:54:04.250885 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-19 04:54:04.250896 | orchestrator | Thursday 19 March 2026 04:54:00 +0000 (0:00:00.137) 0:17:53.673 ******** 2026-03-19 04:54:04.250909 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250921 | orchestrator | 2026-03-19 04:54:04.250933 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-19 04:54:04.250944 | orchestrator | Thursday 19 March 2026 04:54:00 +0000 (0:00:00.447) 0:17:54.120 ******** 2026-03-19 04:54:04.250956 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:04.250967 | orchestrator | 2026-03-19 04:54:04.250978 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-19 04:54:04.250990 | orchestrator | Thursday 19 March 2026 04:54:01 +0000 (0:00:00.170) 0:17:54.291 ******** 2026-03-19 04:54:04.251002 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 04:54:04.251014 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 04:54:04.251035 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 04:54:04.251046 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 04:54:04.251058 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 04:54:04.251070 | orchestrator | 2026-03-19 04:54:04.251082 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-19 04:54:04.251094 | orchestrator | Thursday 19 March 2026 04:54:03 +0000 (0:00:02.884) 0:17:57.176 ******** 2026-03-19 
04:54:04.251106 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:04.251118 | orchestrator | 2026-03-19 04:54:04.251130 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-19 04:54:04.251143 | orchestrator | Thursday 19 March 2026 04:54:04 +0000 (0:00:00.123) 0:17:57.300 ******** 2026-03-19 04:54:04.251155 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-19 04:54:04.251167 | orchestrator | 2026-03-19 04:54:04.251179 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-19 04:54:47.015678 | orchestrator | Thursday 19 March 2026 04:54:04 +0000 (0:00:00.199) 0:17:57.499 ******** 2026-03-19 04:54:47.015849 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 04:54:47.015866 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-19 04:54:47.015879 | orchestrator | 2026-03-19 04:54:47.015891 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-19 04:54:47.015903 | orchestrator | Thursday 19 March 2026 04:54:05 +0000 (0:00:00.853) 0:17:58.352 ******** 2026-03-19 04:54:47.015914 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 04:54:47.015942 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 04:54:47.015954 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 04:54:47.015966 | orchestrator | 2026-03-19 04:54:47.015978 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-19 04:54:47.015989 | orchestrator | Thursday 19 March 2026 04:54:07 +0000 (0:00:02.328) 0:18:00.681 ******** 2026-03-19 04:54:47.016000 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-19 04:54:47.016011 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 
04:54:47.016022 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016034 | orchestrator | 2026-03-19 04:54:47.016045 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-19 04:54:47.016056 | orchestrator | Thursday 19 March 2026 04:54:08 +0000 (0:00:00.984) 0:18:01.666 ******** 2026-03-19 04:54:47.016067 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016078 | orchestrator | 2026-03-19 04:54:47.016089 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-19 04:54:47.016100 | orchestrator | Thursday 19 March 2026 04:54:08 +0000 (0:00:00.222) 0:18:01.888 ******** 2026-03-19 04:54:47.016111 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016122 | orchestrator | 2026-03-19 04:54:47.016133 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-19 04:54:47.016144 | orchestrator | Thursday 19 March 2026 04:54:08 +0000 (0:00:00.127) 0:18:02.015 ******** 2026-03-19 04:54:47.016155 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016166 | orchestrator | 2026-03-19 04:54:47.016177 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-19 04:54:47.016188 | orchestrator | Thursday 19 March 2026 04:54:09 +0000 (0:00:00.391) 0:18:02.407 ******** 2026-03-19 04:54:47.016200 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-19 04:54:47.016213 | orchestrator | 2026-03-19 04:54:47.016227 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-19 04:54:47.016239 | orchestrator | Thursday 19 March 2026 04:54:09 +0000 (0:00:00.203) 0:18:02.610 ******** 2026-03-19 04:54:47.016253 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016291 | orchestrator | 2026-03-19 04:54:47.016305 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-19 04:54:47.016318 | orchestrator | Thursday 19 March 2026 04:54:09 +0000 (0:00:00.486) 0:18:03.097 ******** 2026-03-19 04:54:47.016330 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016343 | orchestrator | 2026-03-19 04:54:47.016356 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-19 04:54:47.016369 | orchestrator | Thursday 19 March 2026 04:54:12 +0000 (0:00:02.440) 0:18:05.537 ******** 2026-03-19 04:54:47.016381 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-19 04:54:47.016392 | orchestrator | 2026-03-19 04:54:47.016403 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-19 04:54:47.016414 | orchestrator | Thursday 19 March 2026 04:54:12 +0000 (0:00:00.199) 0:18:05.736 ******** 2026-03-19 04:54:47.016425 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016436 | orchestrator | 2026-03-19 04:54:47.016447 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-19 04:54:47.016458 | orchestrator | Thursday 19 March 2026 04:54:13 +0000 (0:00:00.966) 0:18:06.703 ******** 2026-03-19 04:54:47.016469 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016480 | orchestrator | 2026-03-19 04:54:47.016491 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-19 04:54:47.016502 | orchestrator | Thursday 19 March 2026 04:54:14 +0000 (0:00:00.944) 0:18:07.647 ******** 2026-03-19 04:54:47.016513 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:54:47.016524 | orchestrator | 2026-03-19 04:54:47.016535 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-19 04:54:47.016546 | orchestrator | Thursday 19 March 2026 04:54:15 +0000 (0:00:01.300) 0:18:08.947 ******** 2026-03-19 
04:54:47.016557 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016568 | orchestrator | 2026-03-19 04:54:47.016579 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-19 04:54:47.016590 | orchestrator | Thursday 19 March 2026 04:54:15 +0000 (0:00:00.154) 0:18:09.102 ******** 2026-03-19 04:54:47.016601 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016612 | orchestrator | 2026-03-19 04:54:47.016623 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-19 04:54:47.016634 | orchestrator | Thursday 19 March 2026 04:54:15 +0000 (0:00:00.131) 0:18:09.233 ******** 2026-03-19 04:54:47.016646 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-19 04:54:47.016657 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-19 04:54:47.016668 | orchestrator | 2026-03-19 04:54:47.016691 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-19 04:54:47.016702 | orchestrator | Thursday 19 March 2026 04:54:16 +0000 (0:00:00.857) 0:18:10.091 ******** 2026-03-19 04:54:47.016793 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-19 04:54:47.016804 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-19 04:54:47.016815 | orchestrator | 2026-03-19 04:54:47.016826 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-19 04:54:47.016836 | orchestrator | Thursday 19 March 2026 04:54:19 +0000 (0:00:02.413) 0:18:12.504 ******** 2026-03-19 04:54:47.016846 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-19 04:54:47.016871 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-19 04:54:47.016882 | orchestrator | 2026-03-19 04:54:47.016892 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-19 04:54:47.016902 | orchestrator | Thursday 19 March 2026 04:54:22 +0000 (0:00:03.582) 
0:18:16.087 ******** 2026-03-19 04:54:47.016912 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016921 | orchestrator | 2026-03-19 04:54:47.016931 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-19 04:54:47.016941 | orchestrator | Thursday 19 March 2026 04:54:23 +0000 (0:00:00.228) 0:18:16.315 ******** 2026-03-19 04:54:47.016951 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.016969 | orchestrator | 2026-03-19 04:54:47.016984 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-19 04:54:47.016994 | orchestrator | Thursday 19 March 2026 04:54:23 +0000 (0:00:00.220) 0:18:16.535 ******** 2026-03-19 04:54:47.017004 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.017014 | orchestrator | 2026-03-19 04:54:47.017023 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-19 04:54:47.017033 | orchestrator | Thursday 19 March 2026 04:54:23 +0000 (0:00:00.309) 0:18:16.845 ******** 2026-03-19 04:54:47.017043 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.017053 | orchestrator | 2026-03-19 04:54:47.017063 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-19 04:54:47.017073 | orchestrator | Thursday 19 March 2026 04:54:23 +0000 (0:00:00.140) 0:18:16.986 ******** 2026-03-19 04:54:47.017082 | orchestrator | skipping: [testbed-node-4] 2026-03-19 04:54:47.017092 | orchestrator | 2026-03-19 04:54:47.017102 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-19 04:54:47.017112 | orchestrator | Thursday 19 March 2026 04:54:23 +0000 (0:00:00.115) 0:18:17.101 ******** 2026-03-19 04:54:47.017122 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-19 04:54:47.017133 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-19 04:54:47.017142 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-19 04:54:47.017152 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-19 04:54:47.017162 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-03-19 04:54:47.017172 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (595 retries left). 2026-03-19 04:54:47.017182 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:54:47.017191 | orchestrator | 2026-03-19 04:54:47.017201 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-19 04:54:47.017211 | orchestrator | 2026-03-19 04:54:47.017221 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:54:47.017231 | orchestrator | Thursday 19 March 2026 04:54:44 +0000 (0:00:20.160) 0:18:37.262 ******** 2026-03-19 04:54:47.017240 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-19 04:54:47.017250 | orchestrator | 2026-03-19 04:54:47.017260 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:54:47.017284 | orchestrator | Thursday 19 March 2026 04:54:44 +0000 (0:00:00.256) 0:18:37.518 ******** 2026-03-19 04:54:47.017305 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017315 | orchestrator | 2026-03-19 04:54:47.017324 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:54:47.017334 | orchestrator | Thursday 19 March 2026 04:54:44 +0000 
(0:00:00.477) 0:18:37.996 ******** 2026-03-19 04:54:47.017344 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017354 | orchestrator | 2026-03-19 04:54:47.017363 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:54:47.017373 | orchestrator | Thursday 19 March 2026 04:54:45 +0000 (0:00:00.440) 0:18:38.437 ******** 2026-03-19 04:54:47.017383 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017392 | orchestrator | 2026-03-19 04:54:47.017402 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:54:47.017411 | orchestrator | Thursday 19 March 2026 04:54:45 +0000 (0:00:00.456) 0:18:38.893 ******** 2026-03-19 04:54:47.017421 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017431 | orchestrator | 2026-03-19 04:54:47.017440 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:54:47.017450 | orchestrator | Thursday 19 March 2026 04:54:45 +0000 (0:00:00.154) 0:18:39.048 ******** 2026-03-19 04:54:47.017466 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017475 | orchestrator | 2026-03-19 04:54:47.017485 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:54:47.017495 | orchestrator | Thursday 19 March 2026 04:54:45 +0000 (0:00:00.133) 0:18:39.181 ******** 2026-03-19 04:54:47.017504 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017514 | orchestrator | 2026-03-19 04:54:47.017523 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:54:47.017533 | orchestrator | Thursday 19 March 2026 04:54:46 +0000 (0:00:00.154) 0:18:39.336 ******** 2026-03-19 04:54:47.017543 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:47.017552 | orchestrator | 2026-03-19 04:54:47.017562 | orchestrator | TASK [ceph-facts : Set_fact ceph_release 
ceph_stable_release] ****************** 2026-03-19 04:54:47.017571 | orchestrator | Thursday 19 March 2026 04:54:46 +0000 (0:00:00.131) 0:18:39.468 ******** 2026-03-19 04:54:47.017581 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:47.017591 | orchestrator | 2026-03-19 04:54:47.017600 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:54:47.017610 | orchestrator | Thursday 19 March 2026 04:54:46 +0000 (0:00:00.140) 0:18:39.608 ******** 2026-03-19 04:54:47.017620 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:54:47.017636 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:54:54.030187 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:54:54.030309 | orchestrator | 2026-03-19 04:54:54.030332 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:54:54.030349 | orchestrator | Thursday 19 March 2026 04:54:46 +0000 (0:00:00.651) 0:18:40.259 ******** 2026-03-19 04:54:54.030365 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.030381 | orchestrator | 2026-03-19 04:54:54.030396 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:54:54.030427 | orchestrator | Thursday 19 March 2026 04:54:47 +0000 (0:00:00.260) 0:18:40.520 ******** 2026-03-19 04:54:54.030443 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:54:54.030459 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:54:54.030473 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:54:54.030488 | orchestrator | 2026-03-19 04:54:54.030503 | orchestrator | TASK [ceph-facts : Check for 
a ceph mon socket] ******************************** 2026-03-19 04:54:54.030517 | orchestrator | Thursday 19 March 2026 04:54:49 +0000 (0:00:02.155) 0:18:42.676 ******** 2026-03-19 04:54:54.030533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 04:54:54.030548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 04:54:54.030563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 04:54:54.030578 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.030592 | orchestrator | 2026-03-19 04:54:54.030607 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:54:54.030622 | orchestrator | Thursday 19 March 2026 04:54:49 +0000 (0:00:00.406) 0:18:43.083 ******** 2026-03-19 04:54:54.030638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030797 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.030812 | orchestrator | 2026-03-19 04:54:54.030828 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:54:54.030843 | orchestrator | Thursday 19 March 2026 04:54:50 +0000 (0:00:00.903) 0:18:43.986 
******** 2026-03-19 04:54:54.030861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030896 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:54.030911 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.030925 | orchestrator | 2026-03-19 04:54:54.030941 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:54:54.030956 | orchestrator | Thursday 19 March 2026 04:54:50 +0000 (0:00:00.158) 0:18:44.145 ******** 2026-03-19 04:54:54.030997 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:54:47.817117', 'end': 
'2026-03-19 04:54:47.860654', 'delta': '0:00:00.043537', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:54:54.031027 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:54:48.362428', 'end': '2026-03-19 04:54:48.413089', 'delta': '0:00:00.050661', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:54:54.031045 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:54:49.225827', 'end': '2026-03-19 04:54:49.269625', 'delta': '0:00:00.043798', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:54:54.031074 | orchestrator | 2026-03-19 04:54:54.031089 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:54:54.031104 | orchestrator | Thursday 19 March 2026 04:54:51 +0000 (0:00:00.455) 0:18:44.600 ******** 2026-03-19 04:54:54.031118 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.031133 | orchestrator | 2026-03-19 04:54:54.031148 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:54:54.031163 | orchestrator | Thursday 19 March 2026 04:54:51 +0000 (0:00:00.261) 0:18:44.862 ******** 2026-03-19 04:54:54.031178 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031192 | orchestrator | 2026-03-19 04:54:54.031207 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:54:54.031217 | orchestrator | Thursday 19 March 2026 04:54:51 +0000 (0:00:00.257) 0:18:45.119 ******** 2026-03-19 04:54:54.031225 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.031234 | orchestrator | 2026-03-19 04:54:54.031242 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:54:54.031251 | orchestrator | Thursday 19 March 2026 04:54:51 +0000 (0:00:00.139) 0:18:45.259 ******** 2026-03-19 04:54:54.031259 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:54:54.031268 | orchestrator | 2026-03-19 04:54:54.031276 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:54:54.031285 | orchestrator | Thursday 19 March 2026 04:54:52 +0000 (0:00:00.994) 0:18:46.253 ******** 2026-03-19 04:54:54.031293 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.031302 | orchestrator | 2026-03-19 04:54:54.031310 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:54:54.031319 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.154) 0:18:46.407 ******** 2026-03-19 04:54:54.031327 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031336 | orchestrator | 2026-03-19 04:54:54.031344 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:54:54.031353 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.116) 0:18:46.524 ******** 2026-03-19 04:54:54.031366 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031380 | orchestrator | 2026-03-19 04:54:54.031393 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:54:54.031405 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.226) 0:18:46.750 ******** 2026-03-19 04:54:54.031419 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031433 | orchestrator | 2026-03-19 04:54:54.031447 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:54:54.031462 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.127) 0:18:46.878 ******** 2026-03-19 04:54:54.031475 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031491 | orchestrator | 2026-03-19 04:54:54.031508 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:54:54.031523 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.111) 0:18:46.989 ******** 2026-03-19 04:54:54.031538 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.031552 | orchestrator | 2026-03-19 04:54:54.031567 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:54:54.031576 | orchestrator | Thursday 19 March 2026 04:54:53 +0000 (0:00:00.177) 0:18:47.166 ******** 
2026-03-19 04:54:54.031585 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.031593 | orchestrator | 2026-03-19 04:54:54.031612 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:54:54.935993 | orchestrator | Thursday 19 March 2026 04:54:54 +0000 (0:00:00.119) 0:18:47.286 ******** 2026-03-19 04:54:54.936095 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.936104 | orchestrator | 2026-03-19 04:54:54.936110 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:54:54.936115 | orchestrator | Thursday 19 March 2026 04:54:54 +0000 (0:00:00.155) 0:18:47.441 ******** 2026-03-19 04:54:54.936121 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:54.936126 | orchestrator | 2026-03-19 04:54:54.936142 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:54:54.936149 | orchestrator | Thursday 19 March 2026 04:54:54 +0000 (0:00:00.374) 0:18:47.816 ******** 2026-03-19 04:54:54.936154 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:54.936158 | orchestrator | 2026-03-19 04:54:54.936163 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:54:54.936168 | orchestrator | Thursday 19 March 2026 04:54:54 +0000 (0:00:00.165) 0:18:47.982 ******** 2026-03-19 04:54:54.936174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 
'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}})  2026-03-19 04:54:54.936191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:54:54.936198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}})  2026-03-19 04:54:54.936204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:54:54.936238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936254 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}})  2026-03-19 04:54:54.936262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}})  2026-03-19 04:54:54.936276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:54.936304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:54:55.287807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:55.287881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:54:55.287896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:54:55.287919 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:55.287931 | orchestrator | 2026-03-19 04:54:55.287936 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:54:55.287941 | orchestrator | Thursday 19 March 2026 04:54:55 +0000 (0:00:00.330) 0:18:48.312 ******** 2026-03-19 04:54:55.287946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.287961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.287966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.287980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.287986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.287995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.288001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.288006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:55.288013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615197 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615291 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615391 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:54:56.615405 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:54:56.615413 | orchestrator | 2026-03-19 04:54:56.615420 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:54:56.615427 | orchestrator | Thursday 19 March 2026 04:54:55 +0000 (0:00:00.412) 0:18:48.724 ******** 2026-03-19 04:54:56.615434 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:56.615441 | orchestrator | 2026-03-19 04:54:56.615447 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:54:56.615453 | orchestrator | Thursday 19 March 2026 04:54:56 +0000 (0:00:00.540) 0:18:49.265 ******** 2026-03-19 04:54:56.615459 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:56.615466 | orchestrator | 2026-03-19 04:54:56.615472 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:54:56.615478 | orchestrator | Thursday 19 March 2026 04:54:56 +0000 (0:00:00.131) 0:18:49.396 ******** 2026-03-19 04:54:56.615484 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:54:56.615490 | orchestrator | 2026-03-19 04:54:56.615496 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:54:56.615505 | orchestrator | Thursday 19 March 2026 04:54:56 +0000 (0:00:00.475) 0:18:49.872 ******** 2026-03-19 04:55:11.187574 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:55:11.187697 | orchestrator | 2026-03-19 04:55:11.187811 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:55:11.187836 | orchestrator | Thursday 19 March 2026 04:54:56 +0000 (0:00:00.141) 0:18:50.013 ******** 2026-03-19 04:55:11.187856 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
04:55:11.187876 | orchestrator | 2026-03-19 04:55:11.187898 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:55:11.187917 | orchestrator | Thursday 19 March 2026 04:54:57 +0000 (0:00:00.261) 0:18:50.275 ******** 2026-03-19 04:55:11.187936 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:55:11.187949 | orchestrator | 2026-03-19 04:55:11.187959 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:55:11.187971 | orchestrator | Thursday 19 March 2026 04:54:57 +0000 (0:00:00.134) 0:18:50.409 ******** 2026-03-19 04:55:11.187982 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 04:55:11.187993 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 04:55:11.188004 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 04:55:11.188015 | orchestrator | 2026-03-19 04:55:11.188026 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:55:11.188036 | orchestrator | Thursday 19 March 2026 04:54:58 +0000 (0:00:00.925) 0:18:51.335 ******** 2026-03-19 04:55:11.188047 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 04:55:11.188058 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 04:55:11.188075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 04:55:11.188094 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:55:11.188112 | orchestrator | 2026-03-19 04:55:11.188131 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:55:11.188149 | orchestrator | Thursday 19 March 2026 04:54:58 +0000 (0:00:00.166) 0:18:51.502 ******** 2026-03-19 04:55:11.188167 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-19 04:55:11.188183 | 
orchestrator |
2026-03-19 04:55:11.188203 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:55:11.188224 | orchestrator | Thursday 19 March 2026 04:54:58 +0000 (0:00:00.480) 0:18:51.982 ********
2026-03-19 04:55:11.188243 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188262 | orchestrator |
2026-03-19 04:55:11.188282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 04:55:11.188300 | orchestrator | Thursday 19 March 2026 04:54:58 +0000 (0:00:00.147) 0:18:52.130 ********
2026-03-19 04:55:11.188319 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188331 | orchestrator |
2026-03-19 04:55:11.188341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 04:55:11.188352 | orchestrator | Thursday 19 March 2026 04:54:59 +0000 (0:00:00.155) 0:18:52.286 ********
2026-03-19 04:55:11.188363 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188373 | orchestrator |
2026-03-19 04:55:11.188385 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 04:55:11.188404 | orchestrator | Thursday 19 March 2026 04:54:59 +0000 (0:00:00.145) 0:18:52.431 ********
2026-03-19 04:55:11.188421 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.188441 | orchestrator |
2026-03-19 04:55:11.188477 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 04:55:11.188496 | orchestrator | Thursday 19 March 2026 04:54:59 +0000 (0:00:00.279) 0:18:52.710 ********
2026-03-19 04:55:11.188514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 04:55:11.188529 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 04:55:11.188550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 04:55:11.188576 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188593 | orchestrator |
2026-03-19 04:55:11.188611 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 04:55:11.188643 | orchestrator | Thursday 19 March 2026 04:54:59 +0000 (0:00:00.407) 0:18:53.118 ********
2026-03-19 04:55:11.188660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 04:55:11.188675 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 04:55:11.188690 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 04:55:11.188736 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188753 | orchestrator |
2026-03-19 04:55:11.188772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 04:55:11.188790 | orchestrator | Thursday 19 March 2026 04:55:00 +0000 (0:00:00.410) 0:18:53.529 ********
2026-03-19 04:55:11.188806 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 04:55:11.188824 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 04:55:11.188842 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 04:55:11.188860 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.188877 | orchestrator |
2026-03-19 04:55:11.188893 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 04:55:11.188911 | orchestrator | Thursday 19 March 2026 04:55:00 +0000 (0:00:00.148) 0:18:53.921 ********
2026-03-19 04:55:11.188930 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.188949 | orchestrator |
2026-03-19 04:55:11.188965 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 04:55:11.188983 | orchestrator | Thursday 19 March 2026 04:55:00 +0000 (0:00:00.148) 0:18:54.069 ********
2026-03-19 04:55:11.189001 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-19 04:55:11.189018 | orchestrator |
2026-03-19 04:55:11.189034 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-19 04:55:11.189051 | orchestrator | Thursday 19 March 2026 04:55:01 +0000 (0:00:00.341) 0:18:54.411 ********
2026-03-19 04:55:11.189092 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:55:11.189111 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:55:11.189128 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:55:11.189145 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-19 04:55:11.189164 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 04:55:11.189181 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 04:55:11.189198 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 04:55:11.189216 | orchestrator |
2026-03-19 04:55:11.189233 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-19 04:55:11.189250 | orchestrator | Thursday 19 March 2026 04:55:02 +0000 (0:00:01.118) 0:18:55.529 ********
2026-03-19 04:55:11.189268 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 04:55:11.189285 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 04:55:11.189303 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 04:55:11.189321 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-19 04:55:11.189339 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 04:55:11.189356 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 04:55:11.189374 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 04:55:11.189392 | orchestrator |
2026-03-19 04:55:11.189410 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-19 04:55:11.189428 | orchestrator | Thursday 19 March 2026 04:55:03 +0000 (0:00:01.600) 0:18:57.130 ********
2026-03-19 04:55:11.189446 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.189481 | orchestrator |
2026-03-19 04:55:11.189499 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-19 04:55:11.189516 | orchestrator | Thursday 19 March 2026 04:55:04 +0000 (0:00:00.738) 0:18:57.868 ********
2026-03-19 04:55:11.189533 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.189551 | orchestrator |
2026-03-19 04:55:11.189568 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-19 04:55:11.189585 | orchestrator | Thursday 19 March 2026 04:55:04 +0000 (0:00:00.140) 0:18:58.009 ********
2026-03-19 04:55:11.189603 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.189621 | orchestrator |
2026-03-19 04:55:11.189639 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-19 04:55:11.189658 | orchestrator | Thursday 19 March 2026 04:55:04 +0000 (0:00:00.252) 0:18:58.261 ********
2026-03-19 04:55:11.189676 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-19 04:55:11.189695 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-19 04:55:11.189741 | orchestrator |
2026-03-19 04:55:11.189760 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 04:55:11.189778 | orchestrator | Thursday 19 March 2026 04:55:08 +0000 (0:00:03.533) 0:19:01.795 ********
2026-03-19 04:55:11.189811 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-19 04:55:11.189829 | orchestrator |
2026-03-19 04:55:11.189844 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 04:55:11.189855 | orchestrator | Thursday 19 March 2026 04:55:08 +0000 (0:00:00.213) 0:19:02.009 ********
2026-03-19 04:55:11.189865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-19 04:55:11.189877 | orchestrator |
2026-03-19 04:55:11.189887 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 04:55:11.189898 | orchestrator | Thursday 19 March 2026 04:55:08 +0000 (0:00:00.191) 0:19:02.201 ********
2026-03-19 04:55:11.189909 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.189919 | orchestrator |
2026-03-19 04:55:11.189930 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 04:55:11.189941 | orchestrator | Thursday 19 March 2026 04:55:09 +0000 (0:00:00.124) 0:19:02.326 ********
2026-03-19 04:55:11.189952 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.189962 | orchestrator |
2026-03-19 04:55:11.189973 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 04:55:11.189984 | orchestrator | Thursday 19 March 2026 04:55:09 +0000 (0:00:00.477) 0:19:02.803 ********
2026-03-19 04:55:11.189995 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.190006 | orchestrator |
2026-03-19 04:55:11.190091 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 04:55:11.190118 | orchestrator | Thursday 19 March 2026 04:55:10 +0000 (0:00:00.503) 0:19:03.307 ********
2026-03-19 04:55:11.190137 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:11.190157 | orchestrator |
2026-03-19 04:55:11.190176 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 04:55:11.190194 | orchestrator | Thursday 19 March 2026 04:55:10 +0000 (0:00:00.502) 0:19:03.810 ********
2026-03-19 04:55:11.190211 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.190223 | orchestrator |
2026-03-19 04:55:11.190233 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 04:55:11.190244 | orchestrator | Thursday 19 March 2026 04:55:10 +0000 (0:00:00.127) 0:19:03.937 ********
2026-03-19 04:55:11.190254 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.190265 | orchestrator |
2026-03-19 04:55:11.190275 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 04:55:11.190284 | orchestrator | Thursday 19 March 2026 04:55:11 +0000 (0:00:00.374) 0:19:04.312 ********
2026-03-19 04:55:11.190294 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:11.190303 | orchestrator |
2026-03-19 04:55:11.190326 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 04:55:21.951679 | orchestrator | Thursday 19 March 2026 04:55:11 +0000 (0:00:00.126) 0:19:04.438 ********
2026-03-19 04:55:21.951848 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.951867 | orchestrator |
2026-03-19 04:55:21.951879 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 04:55:21.951889 | orchestrator | Thursday 19 March 2026 04:55:11 +0000 (0:00:00.533) 0:19:04.972 ********
2026-03-19 04:55:21.951898 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.951908 | orchestrator |
2026-03-19 04:55:21.951917 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 04:55:21.951927 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.532) 0:19:05.504 ********
2026-03-19 04:55:21.951937 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.951947 | orchestrator |
2026-03-19 04:55:21.951957 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 04:55:21.951967 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.123) 0:19:05.628 ********
2026-03-19 04:55:21.951977 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.951986 | orchestrator |
2026-03-19 04:55:21.951995 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 04:55:21.952004 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.119) 0:19:05.747 ********
2026-03-19 04:55:21.952013 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952023 | orchestrator |
2026-03-19 04:55:21.952032 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 04:55:21.952041 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.162) 0:19:05.909 ********
2026-03-19 04:55:21.952051 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952060 | orchestrator |
2026-03-19 04:55:21.952069 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 04:55:21.952079 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.141) 0:19:06.051 ********
2026-03-19 04:55:21.952088 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952097 | orchestrator |
2026-03-19 04:55:21.952107 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 04:55:21.952116 | orchestrator | Thursday 19 March 2026 04:55:12 +0000 (0:00:00.148) 0:19:06.200 ********
2026-03-19 04:55:21.952125 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952134 | orchestrator |
2026-03-19 04:55:21.952143 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 04:55:21.952153 | orchestrator | Thursday 19 March 2026 04:55:13 +0000 (0:00:00.133) 0:19:06.333 ********
2026-03-19 04:55:21.952163 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952172 | orchestrator |
2026-03-19 04:55:21.952182 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 04:55:21.952191 | orchestrator | Thursday 19 March 2026 04:55:13 +0000 (0:00:00.123) 0:19:06.457 ********
2026-03-19 04:55:21.952201 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952210 | orchestrator |
2026-03-19 04:55:21.952220 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 04:55:21.952231 | orchestrator | Thursday 19 March 2026 04:55:13 +0000 (0:00:00.133) 0:19:06.590 ********
2026-03-19 04:55:21.952244 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952256 | orchestrator |
2026-03-19 04:55:21.952268 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 04:55:21.952280 | orchestrator | Thursday 19 March 2026 04:55:13 +0000 (0:00:00.145) 0:19:06.736 ********
2026-03-19 04:55:21.952308 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952318 | orchestrator |
2026-03-19 04:55:21.952330 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 04:55:21.952341 | orchestrator | Thursday 19 March 2026 04:55:13 +0000 (0:00:00.485) 0:19:07.222 ********
2026-03-19 04:55:21.952352 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952363 | orchestrator |
2026-03-19 04:55:21.952372 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 04:55:21.952407 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.133) 0:19:07.355 ********
2026-03-19 04:55:21.952417 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952427 | orchestrator |
2026-03-19 04:55:21.952436 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:55:21.952447 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.129) 0:19:07.484 ********
2026-03-19 04:55:21.952457 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952467 | orchestrator |
2026-03-19 04:55:21.952477 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:55:21.952488 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.137) 0:19:07.622 ********
2026-03-19 04:55:21.952499 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952508 | orchestrator |
2026-03-19 04:55:21.952517 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:55:21.952527 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.159) 0:19:07.782 ********
2026-03-19 04:55:21.952537 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952548 | orchestrator |
2026-03-19 04:55:21.952558 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:55:21.952570 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.123) 0:19:07.905 ********
2026-03-19 04:55:21.952580 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952589 | orchestrator |
2026-03-19 04:55:21.952601 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:55:21.952610 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.133) 0:19:08.039 ********
2026-03-19 04:55:21.952619 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952629 | orchestrator |
2026-03-19 04:55:21.952638 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:55:21.952648 | orchestrator | Thursday 19 March 2026 04:55:14 +0000 (0:00:00.143) 0:19:08.183 ********
2026-03-19 04:55:21.952657 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952665 | orchestrator |
2026-03-19 04:55:21.952674 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:55:21.952683 | orchestrator | Thursday 19 March 2026 04:55:15 +0000 (0:00:00.124) 0:19:08.307 ********
2026-03-19 04:55:21.952739 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952750 | orchestrator |
2026-03-19 04:55:21.952759 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:55:21.952769 | orchestrator | Thursday 19 March 2026 04:55:15 +0000 (0:00:00.130) 0:19:08.437 ********
2026-03-19 04:55:21.952779 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952789 | orchestrator |
2026-03-19 04:55:21.952799 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:55:21.952809 | orchestrator | Thursday 19 March 2026 04:55:15 +0000 (0:00:00.117) 0:19:08.555 ********
2026-03-19 04:55:21.952819 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952829 | orchestrator |
2026-03-19 04:55:21.952838 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:55:21.952848 | orchestrator | Thursday 19 March 2026 04:55:15 +0000 (0:00:00.116) 0:19:08.672 ********
2026-03-19 04:55:21.952857 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.952867 | orchestrator |
2026-03-19 04:55:21.952876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:55:21.952886 | orchestrator | Thursday 19 March 2026 04:55:15 +0000 (0:00:00.434) 0:19:09.106 ********
2026-03-19 04:55:21.952896 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952905 | orchestrator |
2026-03-19 04:55:21.952915 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:55:21.952925 | orchestrator | Thursday 19 March 2026 04:55:16 +0000 (0:00:00.956) 0:19:10.062 ********
2026-03-19 04:55:21.952934 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.952942 | orchestrator |
2026-03-19 04:55:21.952951 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:55:21.952972 | orchestrator | Thursday 19 March 2026 04:55:18 +0000 (0:00:01.229) 0:19:11.292 ********
2026-03-19 04:55:21.952981 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-19 04:55:21.952991 | orchestrator |
2026-03-19 04:55:21.952999 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:55:21.953007 | orchestrator | Thursday 19 March 2026 04:55:18 +0000 (0:00:00.205) 0:19:11.498 ********
2026-03-19 04:55:21.953016 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953024 | orchestrator |
2026-03-19 04:55:21.953032 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:55:21.953041 | orchestrator | Thursday 19 March 2026 04:55:18 +0000 (0:00:00.143) 0:19:11.641 ********
2026-03-19 04:55:21.953050 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953058 | orchestrator |
2026-03-19 04:55:21.953067 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:55:21.953076 | orchestrator | Thursday 19 March 2026 04:55:18 +0000 (0:00:00.143) 0:19:11.784 ********
2026-03-19 04:55:21.953085 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:55:21.953094 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:55:21.953103 | orchestrator |
2026-03-19 04:55:21.953111 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:55:21.953119 | orchestrator | Thursday 19 March 2026 04:55:19 +0000 (0:00:00.850) 0:19:12.635 ********
2026-03-19 04:55:21.953127 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.953136 | orchestrator |
2026-03-19 04:55:21.953152 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:55:21.953162 | orchestrator | Thursday 19 March 2026 04:55:19 +0000 (0:00:00.474) 0:19:13.109 ********
2026-03-19 04:55:21.953170 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953179 | orchestrator |
2026-03-19 04:55:21.953186 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:55:21.953195 | orchestrator | Thursday 19 March 2026 04:55:19 +0000 (0:00:00.137) 0:19:13.247 ********
2026-03-19 04:55:21.953203 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953211 | orchestrator |
2026-03-19 04:55:21.953219 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:55:21.953227 | orchestrator | Thursday 19 March 2026 04:55:20 +0000 (0:00:00.152) 0:19:13.400 ********
2026-03-19 04:55:21.953235 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953244 | orchestrator |
2026-03-19 04:55:21.953253 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:55:21.953261 | orchestrator | Thursday 19 March 2026 04:55:20 +0000 (0:00:00.151) 0:19:13.551 ********
2026-03-19 04:55:21.953269 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-19 04:55:21.953277 | orchestrator |
2026-03-19 04:55:21.953284 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:55:21.953292 | orchestrator | Thursday 19 March 2026 04:55:20 +0000 (0:00:00.464) 0:19:14.016 ********
2026-03-19 04:55:21.953300 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:21.953308 | orchestrator |
2026-03-19 04:55:21.953316 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:55:21.953325 | orchestrator | Thursday 19 March 2026 04:55:21 +0000 (0:00:00.735) 0:19:14.751 ********
2026-03-19 04:55:21.953333 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:55:21.953341 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:55:21.953349 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:55:21.953358 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953365 | orchestrator |
2026-03-19 04:55:21.953381 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:55:21.953389 | orchestrator | Thursday 19 March 2026 04:55:21 +0000 (0:00:00.148) 0:19:14.900 ********
2026-03-19 04:55:21.953396 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:21.953404 | orchestrator |
2026-03-19 04:55:21.953412 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:55:21.953420 | orchestrator | Thursday 19 March 2026 04:55:21 +0000 (0:00:00.136) 0:19:15.036 ********
2026-03-19 04:55:21.953440 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665308 | orchestrator |
2026-03-19 04:55:39.665455 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:55:39.665473 | orchestrator | Thursday 19 March 2026 04:55:21 +0000 (0:00:00.169) 0:19:15.205 ********
2026-03-19 04:55:39.665486 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665498 | orchestrator |
2026-03-19 04:55:39.665509 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:55:39.665520 | orchestrator | Thursday 19 March 2026 04:55:22 +0000 (0:00:00.154) 0:19:15.360 ********
2026-03-19 04:55:39.665531 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665542 | orchestrator |
2026-03-19 04:55:39.665553 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:55:39.665564 | orchestrator | Thursday 19 March 2026 04:55:22 +0000 (0:00:00.157) 0:19:15.517 ********
2026-03-19 04:55:39.665574 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665585 | orchestrator |
2026-03-19 04:55:39.665596 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:55:39.665606 | orchestrator | Thursday 19 March 2026 04:55:22 +0000 (0:00:00.147) 0:19:15.664 ********
2026-03-19 04:55:39.665617 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:39.665629 | orchestrator |
2026-03-19 04:55:39.665639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:55:39.665651 | orchestrator | Thursday 19 March 2026 04:55:23 +0000 (0:00:01.553) 0:19:17.217 ********
2026-03-19 04:55:39.665661 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:39.665672 | orchestrator |
2026-03-19 04:55:39.665683 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:55:39.665694 | orchestrator | Thursday 19 March 2026 04:55:24 +0000 (0:00:00.148) 0:19:17.366 ********
2026-03-19 04:55:39.665809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-19 04:55:39.665825 | orchestrator |
2026-03-19 04:55:39.665837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:55:39.665850 | orchestrator | Thursday 19 March 2026 04:55:24 +0000 (0:00:00.220) 0:19:17.587 ********
2026-03-19 04:55:39.665862 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665875 | orchestrator |
2026-03-19 04:55:39.665887 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:55:39.665899 | orchestrator | Thursday 19 March 2026 04:55:24 +0000 (0:00:00.142) 0:19:17.729 ********
2026-03-19 04:55:39.665912 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665924 | orchestrator |
2026-03-19 04:55:39.665937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:55:39.665949 | orchestrator | Thursday 19 March 2026 04:55:24 +0000 (0:00:00.401) 0:19:18.130 ********
2026-03-19 04:55:39.665962 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.665975 | orchestrator |
2026-03-19 04:55:39.665987 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:55:39.665999 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.139) 0:19:18.270 ********
2026-03-19 04:55:39.666087 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666114 | orchestrator |
2026-03-19 04:55:39.666131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:55:39.666149 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.163) 0:19:18.433 ********
2026-03-19 04:55:39.666188 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666236 | orchestrator |
2026-03-19 04:55:39.666254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:55:39.666272 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.163) 0:19:18.597 ********
2026-03-19 04:55:39.666289 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666306 | orchestrator |
2026-03-19 04:55:39.666324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:55:39.666340 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.142) 0:19:18.740 ********
2026-03-19 04:55:39.666354 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666371 | orchestrator |
2026-03-19 04:55:39.666387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:55:39.666402 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.147) 0:19:18.888 ********
2026-03-19 04:55:39.666412 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666421 | orchestrator |
2026-03-19 04:55:39.666431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:55:39.666440 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.144) 0:19:19.032 ********
2026-03-19 04:55:39.666449 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:39.666459 | orchestrator |
2026-03-19 04:55:39.666468 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:55:39.666478 | orchestrator | Thursday 19 March 2026 04:55:25 +0000 (0:00:00.214) 0:19:19.246 ********
2026-03-19 04:55:39.666487 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-19 04:55:39.666498 | orchestrator |
2026-03-19 04:55:39.666507 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:55:39.666517 | orchestrator | Thursday 19 March 2026 04:55:26 +0000 (0:00:00.204) 0:19:19.451 ********
2026-03-19 04:55:39.666526 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-19 04:55:39.666536 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-19 04:55:39.666546 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-19 04:55:39.666555 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-19 04:55:39.666564 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-19 04:55:39.666574 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-19 04:55:39.666583 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-19 04:55:39.666592 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:55:39.666603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:55:39.666632 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:55:39.666642 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:55:39.666652 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:55:39.666661 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:55:39.666671 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:55:39.666680 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-19 04:55:39.666690 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-19 04:55:39.666699 | orchestrator |
2026-03-19 04:55:39.666734 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:55:39.666743 | orchestrator | Thursday 19 March 2026 04:55:31 +0000 (0:00:05.717) 0:19:25.168 ********
2026-03-19 04:55:39.666753 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-19 04:55:39.666762 | orchestrator |
2026-03-19 04:55:39.666772 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 04:55:39.666781 | orchestrator | Thursday 19 March 2026 04:55:32 +0000 (0:00:00.188) 0:19:25.357 ********
2026-03-19 04:55:39.666791 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 04:55:39.666811 | orchestrator |
2026-03-19 04:55:39.666820 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 04:55:39.666830 | orchestrator | Thursday 19 March 2026 04:55:32 +0000 (0:00:00.806) 0:19:26.164 ********
2026-03-19 04:55:39.666839 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 04:55:39.666849 | orchestrator |
2026-03-19 04:55:39.666859 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:55:39.666868 | orchestrator | Thursday 19 March 2026 04:55:33 +0000 (0:00:00.965) 0:19:27.129 ********
2026-03-19 04:55:39.666878 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666887 | orchestrator |
2026-03-19 04:55:39.666897 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:55:39.666907 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.143) 0:19:27.273 ********
2026-03-19 04:55:39.666916 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666926 | orchestrator |
2026-03-19 04:55:39.666935 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:55:39.666945 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.132) 0:19:27.405 ********
2026-03-19 04:55:39.666954 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.666964 | orchestrator |
2026-03-19 04:55:39.666973 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:55:39.666983 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.127) 0:19:27.533 ********
2026-03-19 04:55:39.666992 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667002 | orchestrator |
2026-03-19 04:55:39.667011 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:55:39.667021 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.127) 0:19:27.660 ********
2026-03-19 04:55:39.667038 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667048 | orchestrator |
2026-03-19 04:55:39.667058 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:55:39.667067 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.131) 0:19:27.791 ********
2026-03-19 04:55:39.667077 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667087 | orchestrator |
2026-03-19 04:55:39.667097 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:55:39.667106 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.131) 0:19:27.922 ********
2026-03-19 04:55:39.667116 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667125 | orchestrator |
2026-03-19 04:55:39.667135 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:55:39.667145 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.144) 0:19:28.067 ********
2026-03-19 04:55:39.667154 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667164 | orchestrator |
2026-03-19 04:55:39.667174 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:55:39.667183 | orchestrator | Thursday 19 March 2026 04:55:34 +0000 (0:00:00.129) 0:19:28.197 ********
2026-03-19 04:55:39.667193 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667203 | orchestrator |
2026-03-19 04:55:39.667212 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 04:55:39.667222 | orchestrator | Thursday 19 March 2026 04:55:35 +0000 (0:00:00.129) 0:19:28.327 ********
2026-03-19 04:55:39.667231 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:55:39.667241 | orchestrator |
2026-03-19 04:55:39.667251 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 04:55:39.667260 | orchestrator | Thursday 19 March 2026 04:55:35 +0000 (0:00:00.110) 0:19:28.438 ********
2026-03-19 04:55:39.667270 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:55:39.667286 | orchestrator |
2026-03-19 04:55:39.667295 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 04:55:39.667305 | orchestrator | Thursday 19 March 2026 04:55:35 +0000 (0:00:00.220) 0:19:28.658 ********
2026-03-19 04:55:39.667314 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-19 04:55:39.667324 | orchestrator |
2026-03-19 04:55:39.667334 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 04:55:39.667343 | orchestrator | Thursday 19 March 2026 04:55:39 +0000 (0:00:04.088) 0:19:32.746 ********
2026-03-19 04:55:39.667359 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 04:56:00.442497 | orchestrator |
2026-03-19 04:56:00.442651 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 04:56:00.442681 | orchestrator | Thursday 19 March 2026 04:55:39 +0000 (0:00:00.173) 0:19:32.920 ********
2026-03-19 04:56:00.442739 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-19 04:56:00.442765 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-19 04:56:00.442786 | orchestrator |
2026-03-19 04:56:00.442806 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 04:56:00.442825 | orchestrator | Thursday 19 March 2026 04:55:46 +0000 (0:00:06.817) 0:19:39.737 ********
2026-03-19 04:56:00.442844 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:56:00.442863 | orchestrator |
2026-03-19 04:56:00.442882 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 04:56:00.442902 | orchestrator | Thursday 19 March 2026 04:55:46 +0000 (0:00:00.144) 0:19:39.881 ********
2026-03-19 04:56:00.442923 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:56:00.442941 | orchestrator |
2026-03-19 04:56:00.442961 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 04:56:00.442981 | orchestrator | Thursday 19 March 2026 04:55:46 +0000 (0:00:00.136) 0:19:40.017 ********
2026-03-19 04:56:00.443001 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:56:00.443020 | orchestrator |
2026-03-19 04:56:00.443039 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to
radosgw_address_block ipv4] **** 2026-03-19 04:56:00.443058 | orchestrator | Thursday 19 March 2026 04:55:46 +0000 (0:00:00.161) 0:19:40.179 ******** 2026-03-19 04:56:00.443077 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.443096 | orchestrator | 2026-03-19 04:56:00.443116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:56:00.443134 | orchestrator | Thursday 19 March 2026 04:55:47 +0000 (0:00:00.203) 0:19:40.382 ******** 2026-03-19 04:56:00.443153 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.443172 | orchestrator | 2026-03-19 04:56:00.443192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:56:00.443213 | orchestrator | Thursday 19 March 2026 04:55:47 +0000 (0:00:00.155) 0:19:40.538 ******** 2026-03-19 04:56:00.443232 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.443252 | orchestrator | 2026-03-19 04:56:00.443271 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:56:00.443312 | orchestrator | Thursday 19 March 2026 04:55:47 +0000 (0:00:00.240) 0:19:40.779 ******** 2026-03-19 04:56:00.443332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:56:00.443354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:56:00.443406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 04:56:00.443428 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.443446 | orchestrator | 2026-03-19 04:56:00.443465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:56:00.443483 | orchestrator | Thursday 19 March 2026 04:55:47 +0000 (0:00:00.452) 0:19:41.231 ******** 2026-03-19 04:56:00.443501 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:56:00.443519 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:56:00.443537 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 04:56:00.443555 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.443575 | orchestrator | 2026-03-19 04:56:00.443594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:56:00.443614 | orchestrator | Thursday 19 March 2026 04:55:48 +0000 (0:00:00.410) 0:19:41.641 ******** 2026-03-19 04:56:00.443633 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:56:00.443652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:56:00.443732 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 04:56:00.443753 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.443772 | orchestrator | 2026-03-19 04:56:00.443791 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:56:00.443809 | orchestrator | Thursday 19 March 2026 04:55:49 +0000 (0:00:00.779) 0:19:42.421 ******** 2026-03-19 04:56:00.443827 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.443846 | orchestrator | 2026-03-19 04:56:00.443865 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:56:00.443885 | orchestrator | Thursday 19 March 2026 04:55:49 +0000 (0:00:00.153) 0:19:42.575 ******** 2026-03-19 04:56:00.443904 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 04:56:00.443922 | orchestrator | 2026-03-19 04:56:00.443939 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 04:56:00.443958 | orchestrator | Thursday 19 March 2026 04:55:50 +0000 (0:00:01.000) 0:19:43.576 ******** 2026-03-19 04:56:00.443977 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.443997 | orchestrator | 
2026-03-19 04:56:00.444015 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-19 04:56:00.444034 | orchestrator | Thursday 19 March 2026 04:55:51 +0000 (0:00:00.824) 0:19:44.400 ******** 2026-03-19 04:56:00.444052 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.444071 | orchestrator | 2026-03-19 04:56:00.444120 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-19 04:56:00.444141 | orchestrator | Thursday 19 March 2026 04:55:51 +0000 (0:00:00.155) 0:19:44.556 ******** 2026-03-19 04:56:00.444159 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:56:00.444179 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:56:00.444198 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:56:00.444217 | orchestrator | 2026-03-19 04:56:00.444236 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-19 04:56:00.444254 | orchestrator | Thursday 19 March 2026 04:55:51 +0000 (0:00:00.666) 0:19:45.223 ******** 2026-03-19 04:56:00.444272 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-19 04:56:00.444291 | orchestrator | 2026-03-19 04:56:00.444310 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-19 04:56:00.444329 | orchestrator | Thursday 19 March 2026 04:55:52 +0000 (0:00:00.204) 0:19:45.427 ******** 2026-03-19 04:56:00.444347 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.444365 | orchestrator | 2026-03-19 04:56:00.444385 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-19 04:56:00.444421 | orchestrator | Thursday 19 March 2026 04:55:52 +0000 (0:00:00.131) 
0:19:45.559 ******** 2026-03-19 04:56:00.444442 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.444461 | orchestrator | 2026-03-19 04:56:00.444480 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-19 04:56:00.444499 | orchestrator | Thursday 19 March 2026 04:55:52 +0000 (0:00:00.136) 0:19:45.695 ******** 2026-03-19 04:56:00.444518 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.444536 | orchestrator | 2026-03-19 04:56:00.444554 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-19 04:56:00.444572 | orchestrator | Thursday 19 March 2026 04:55:52 +0000 (0:00:00.435) 0:19:46.131 ******** 2026-03-19 04:56:00.444591 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.444609 | orchestrator | 2026-03-19 04:56:00.444628 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-19 04:56:00.444648 | orchestrator | Thursday 19 March 2026 04:55:53 +0000 (0:00:00.207) 0:19:46.339 ******** 2026-03-19 04:56:00.444666 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-19 04:56:00.444684 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-19 04:56:00.444776 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-19 04:56:00.444802 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-19 04:56:00.444821 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-19 04:56:00.444840 | orchestrator | 2026-03-19 04:56:00.444858 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-19 04:56:00.444890 | orchestrator | Thursday 19 March 2026 04:55:55 +0000 (0:00:01.936) 0:19:48.275 ******** 2026-03-19 
04:56:00.444909 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.444927 | orchestrator | 2026-03-19 04:56:00.444946 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-19 04:56:00.444965 | orchestrator | Thursday 19 March 2026 04:55:55 +0000 (0:00:00.451) 0:19:48.727 ******** 2026-03-19 04:56:00.444984 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-19 04:56:00.445002 | orchestrator | 2026-03-19 04:56:00.445019 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-19 04:56:00.445037 | orchestrator | Thursday 19 March 2026 04:55:55 +0000 (0:00:00.186) 0:19:48.913 ******** 2026-03-19 04:56:00.445056 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-19 04:56:00.445076 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-19 04:56:00.445094 | orchestrator | 2026-03-19 04:56:00.445112 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-19 04:56:00.445130 | orchestrator | Thursday 19 March 2026 04:55:56 +0000 (0:00:00.828) 0:19:49.742 ******** 2026-03-19 04:56:00.445148 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 04:56:00.445168 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 04:56:00.445188 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 04:56:00.445205 | orchestrator | 2026-03-19 04:56:00.445224 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-19 04:56:00.445241 | orchestrator | Thursday 19 March 2026 04:55:58 +0000 (0:00:02.379) 0:19:52.121 ******** 2026-03-19 04:56:00.445260 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-19 04:56:00.445279 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 
04:56:00.445297 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:56:00.445316 | orchestrator | 2026-03-19 04:56:00.445333 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-19 04:56:00.445351 | orchestrator | Thursday 19 March 2026 04:55:59 +0000 (0:00:01.002) 0:19:53.124 ******** 2026-03-19 04:56:00.445370 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.445402 | orchestrator | 2026-03-19 04:56:00.445420 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-19 04:56:00.445435 | orchestrator | Thursday 19 March 2026 04:56:00 +0000 (0:00:00.269) 0:19:53.393 ******** 2026-03-19 04:56:00.445446 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.445457 | orchestrator | 2026-03-19 04:56:00.445468 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-19 04:56:00.445479 | orchestrator | Thursday 19 March 2026 04:56:00 +0000 (0:00:00.158) 0:19:53.552 ******** 2026-03-19 04:56:00.445490 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:56:00.445500 | orchestrator | 2026-03-19 04:56:00.445521 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-19 04:58:06.795014 | orchestrator | Thursday 19 March 2026 04:56:00 +0000 (0:00:00.142) 0:19:53.694 ******** 2026-03-19 04:58:06.795133 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-19 04:58:06.795151 | orchestrator | 2026-03-19 04:58:06.795164 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-19 04:58:06.795175 | orchestrator | Thursday 19 March 2026 04:56:00 +0000 (0:00:00.203) 0:19:53.898 ******** 2026-03-19 04:58:06.795186 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.795198 | orchestrator | 2026-03-19 04:58:06.795209 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-19 04:58:06.795234 | orchestrator | Thursday 19 March 2026 04:56:01 +0000 (0:00:00.484) 0:19:54.382 ******** 2026-03-19 04:58:06.795246 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.795257 | orchestrator | 2026-03-19 04:58:06.795268 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-19 04:58:06.795279 | orchestrator | Thursday 19 March 2026 04:56:03 +0000 (0:00:02.559) 0:19:56.941 ******** 2026-03-19 04:58:06.795290 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-19 04:58:06.795300 | orchestrator | 2026-03-19 04:58:06.795311 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-19 04:58:06.795322 | orchestrator | Thursday 19 March 2026 04:56:04 +0000 (0:00:00.491) 0:19:57.433 ******** 2026-03-19 04:58:06.795333 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.795344 | orchestrator | 2026-03-19 04:58:06.795354 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-19 04:58:06.795365 | orchestrator | Thursday 19 March 2026 04:56:05 +0000 (0:00:01.013) 0:19:58.447 ******** 2026-03-19 04:58:06.795376 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.795387 | orchestrator | 2026-03-19 04:58:06.795398 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-19 04:58:06.795408 | orchestrator | Thursday 19 March 2026 04:56:06 +0000 (0:00:00.943) 0:19:59.390 ******** 2026-03-19 04:58:06.795419 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.795430 | orchestrator | 2026-03-19 04:58:06.795441 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-19 04:58:06.795461 | orchestrator | Thursday 19 March 2026 04:56:07 +0000 (0:00:01.269) 0:20:00.659 ******** 2026-03-19 
04:58:06.795485 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.795506 | orchestrator | 2026-03-19 04:58:06.795524 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-19 04:58:06.795543 | orchestrator | Thursday 19 March 2026 04:56:07 +0000 (0:00:00.147) 0:20:00.806 ******** 2026-03-19 04:58:06.795562 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.795580 | orchestrator | 2026-03-19 04:58:06.795599 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-19 04:58:06.795618 | orchestrator | Thursday 19 March 2026 04:56:07 +0000 (0:00:00.139) 0:20:00.946 ******** 2026-03-19 04:58:06.795636 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-19 04:58:06.795656 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-19 04:58:06.795676 | orchestrator | 2026-03-19 04:58:06.795699 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-19 04:58:06.795783 | orchestrator | Thursday 19 March 2026 04:56:08 +0000 (0:00:00.894) 0:20:01.840 ******** 2026-03-19 04:58:06.795824 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-19 04:58:06.795838 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-19 04:58:06.795851 | orchestrator | 2026-03-19 04:58:06.795868 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-19 04:58:06.795888 | orchestrator | Thursday 19 March 2026 04:56:10 +0000 (0:00:01.885) 0:20:03.725 ******** 2026-03-19 04:58:06.795902 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-19 04:58:06.795913 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-19 04:58:06.795924 | orchestrator | 2026-03-19 04:58:06.795935 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-19 04:58:06.795946 | orchestrator | Thursday 19 March 2026 04:56:14 +0000 (0:00:03.667) 
0:20:07.393 ******** 2026-03-19 04:58:06.795956 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.795967 | orchestrator | 2026-03-19 04:58:06.795978 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-19 04:58:06.795989 | orchestrator | Thursday 19 March 2026 04:56:14 +0000 (0:00:00.251) 0:20:07.644 ******** 2026-03-19 04:58:06.795999 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-19 04:58:06.796012 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:58:06.796023 | orchestrator | 2026-03-19 04:58:06.796034 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-19 04:58:06.796044 | orchestrator | Thursday 19 March 2026 04:56:26 +0000 (0:00:12.454) 0:20:20.099 ******** 2026-03-19 04:58:06.796055 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.796066 | orchestrator | 2026-03-19 04:58:06.796089 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-19 04:58:06.796101 | orchestrator | Thursday 19 March 2026 04:56:27 +0000 (0:00:00.303) 0:20:20.402 ******** 2026-03-19 04:58:06.796112 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.796123 | orchestrator | 2026-03-19 04:58:06.796133 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-19 04:58:06.796144 | orchestrator | Thursday 19 March 2026 04:56:27 +0000 (0:00:00.425) 0:20:20.827 ******** 2026-03-19 04:58:06.796155 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:06.796166 | orchestrator | 2026-03-19 04:58:06.796176 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-19 04:58:06.796187 | orchestrator | Thursday 19 March 2026 04:56:27 +0000 (0:00:00.128) 0:20:20.955 ******** 2026-03-19 04:58:06.796198 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-03-19 04:58:06.796209 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-19 04:58:06.796241 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:58:06.796253 | orchestrator | 2026-03-19 04:58:06.796264 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-19 04:58:06.796275 | orchestrator | 2026-03-19 04:58:06.796285 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:58:06.796296 | orchestrator | Thursday 19 March 2026 04:56:35 +0000 (0:00:07.922) 0:20:28.878 ******** 2026-03-19 04:58:06.796307 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:58:06.796318 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:58:06.796329 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.796339 | orchestrator | 2026-03-19 04:58:06.796350 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:58:06.796361 | orchestrator | Thursday 19 March 2026 04:56:36 +0000 (0:00:00.694) 0:20:29.573 ******** 2026-03-19 04:58:06.796388 | orchestrator | ok: [testbed-node-3] 2026-03-19 04:58:06.796411 | orchestrator | ok: [testbed-node-4] 2026-03-19 04:58:06.796424 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:06.796442 | orchestrator | 2026-03-19 04:58:06.796461 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-03-19 04:58:06.796495 | orchestrator | Thursday 19 March 2026 04:56:37 +0000 (0:00:00.813) 0:20:30.387 ******** 2026-03-19 04:58:06.796513 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-19 04:58:06.796530 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-19 04:58:06.796549 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-19 04:58:06.796567 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-19 04:58:06.796587 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-19 04:58:06.796605 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-19 04:58:06.796623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-19 04:58:06.796641 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-19 04:58:06.796661 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-19 04:58:06.796679 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-19 04:58:06.796699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-19 04:58:06.796767 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-19 04:58:06.796788 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-19 04:58:06.796807 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-19 04:58:06.796826 | orchestrator | 
2026-03-19 04:58:06.796846 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-19 04:58:06.796864 | orchestrator | Thursday 19 March 2026 04:57:56 +0000 (0:01:19.062) 0:21:49.450 ******** 2026-03-19 04:58:06.796884 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-19 04:58:06.796902 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-19 04:58:06.796920 | orchestrator | 2026-03-19 04:58:06.796937 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-19 04:58:06.796956 | orchestrator | Thursday 19 March 2026 04:58:02 +0000 (0:00:06.308) 0:21:55.758 ******** 2026-03-19 04:58:06.796975 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:58:06.796992 | orchestrator | 2026-03-19 04:58:06.797011 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-19 04:58:06.797031 | orchestrator | 2026-03-19 04:58:06.797049 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:58:06.797068 | orchestrator | Thursday 19 March 2026 04:58:05 +0000 (0:00:02.589) 0:21:58.348 ******** 2026-03-19 04:58:06.797080 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-19 04:58:06.797091 | orchestrator | 2026-03-19 04:58:06.797102 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:58:06.797121 | orchestrator | Thursday 19 March 2026 04:58:05 +0000 (0:00:00.278) 0:21:58.626 ******** 2026-03-19 04:58:06.797139 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:06.797157 | orchestrator | 2026-03-19 04:58:06.797175 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:58:06.797193 | 
orchestrator | Thursday 19 March 2026 04:58:05 +0000 (0:00:00.523) 0:21:59.150 ******** 2026-03-19 04:58:06.797213 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:06.797232 | orchestrator | 2026-03-19 04:58:06.797263 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:58:06.797282 | orchestrator | Thursday 19 March 2026 04:58:06 +0000 (0:00:00.151) 0:21:59.301 ******** 2026-03-19 04:58:06.797300 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:06.797319 | orchestrator | 2026-03-19 04:58:06.797336 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:58:06.797353 | orchestrator | Thursday 19 March 2026 04:58:06 +0000 (0:00:00.573) 0:21:59.875 ******** 2026-03-19 04:58:06.797364 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:06.797375 | orchestrator | 2026-03-19 04:58:06.797399 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:58:15.188785 | orchestrator | Thursday 19 March 2026 04:58:06 +0000 (0:00:00.170) 0:22:00.046 ******** 2026-03-19 04:58:15.188880 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.188893 | orchestrator | 2026-03-19 04:58:15.188900 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:58:15.188905 | orchestrator | Thursday 19 March 2026 04:58:06 +0000 (0:00:00.166) 0:22:00.212 ******** 2026-03-19 04:58:15.188909 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.188913 | orchestrator | 2026-03-19 04:58:15.188917 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:58:15.188922 | orchestrator | Thursday 19 March 2026 04:58:07 +0000 (0:00:00.455) 0:22:00.668 ******** 2026-03-19 04:58:15.188926 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.188932 | orchestrator | 2026-03-19 04:58:15.188935 | 
orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:58:15.188939 | orchestrator | Thursday 19 March 2026 04:58:07 +0000 (0:00:00.167) 0:22:00.835 ******** 2026-03-19 04:58:15.188943 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.188947 | orchestrator | 2026-03-19 04:58:15.188951 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:58:15.188954 | orchestrator | Thursday 19 March 2026 04:58:07 +0000 (0:00:00.153) 0:22:00.988 ******** 2026-03-19 04:58:15.188958 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:58:15.188962 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:15.188966 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:15.188970 | orchestrator | 2026-03-19 04:58:15.188974 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:58:15.188977 | orchestrator | Thursday 19 March 2026 04:58:08 +0000 (0:00:00.689) 0:22:01.678 ******** 2026-03-19 04:58:15.188981 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.188985 | orchestrator | 2026-03-19 04:58:15.188989 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:58:15.188992 | orchestrator | Thursday 19 March 2026 04:58:08 +0000 (0:00:00.274) 0:22:01.952 ******** 2026-03-19 04:58:15.188996 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:58:15.189000 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:15.189003 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:15.189007 | orchestrator | 2026-03-19 04:58:15.189011 | orchestrator | TASK [ceph-facts : Check for a ceph 
mon socket] ******************************** 2026-03-19 04:58:15.189015 | orchestrator | Thursday 19 March 2026 04:58:10 +0000 (0:00:02.054) 0:22:04.007 ******** 2026-03-19 04:58:15.189019 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:58:15.189023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:58:15.189027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:58:15.189031 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.189035 | orchestrator | 2026-03-19 04:58:15.189051 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:58:15.189055 | orchestrator | Thursday 19 March 2026 04:58:11 +0000 (0:00:00.513) 0:22:04.520 ******** 2026-03-19 04:58:15.189076 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189087 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189091 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.189094 | orchestrator | 2026-03-19 04:58:15.189098 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:58:15.189102 | orchestrator | Thursday 19 March 2026 04:58:11 +0000 (0:00:00.693) 0:22:05.214 ******** 
2026-03-19 04:58:15.189107 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189128 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:15.189132 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.189136 | orchestrator | 2026-03-19 04:58:15.189140 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:58:15.189144 | orchestrator | Thursday 19 March 2026 04:58:12 +0000 (0:00:00.216) 0:22:05.430 ******** 2026-03-19 04:58:15.189149 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:58:09.292695', 'end': '2026-03-19 
04:58:09.345624', 'delta': '0:00:00.052929', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:15.189156 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:58:09.928997', 'end': '2026-03-19 04:58:09.982089', 'delta': '0:00:00.053092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:15.189166 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:58:10.538070', 'end': '2026-03-19 04:58:10.587084', 'delta': '0:00:00.049014', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:15.189171 | orchestrator | 2026-03-19 04:58:15.189175 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:58:15.189178 | orchestrator | Thursday 19 March 2026 04:58:12 +0000 (0:00:00.229) 0:22:05.659 ******** 2026-03-19 04:58:15.189182 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.189186 | orchestrator | 2026-03-19 04:58:15.189190 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:58:15.189193 | orchestrator | Thursday 19 March 2026 04:58:12 +0000 (0:00:00.287) 0:22:05.947 ******** 2026-03-19 04:58:15.189197 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.189201 | orchestrator | 2026-03-19 04:58:15.189205 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:58:15.189208 | orchestrator | Thursday 19 March 2026 04:58:12 +0000 (0:00:00.299) 0:22:06.246 ******** 2026-03-19 04:58:15.189212 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.189216 | orchestrator | 2026-03-19 04:58:15.189220 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:58:15.189224 | orchestrator | Thursday 19 March 2026 04:58:13 +0000 (0:00:00.152) 0:22:06.398 ******** 2026-03-19 04:58:15.189227 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.189231 | orchestrator | 2026-03-19 04:58:15.189235 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:58:15.189239 | orchestrator | Thursday 19 March 2026 04:58:14 +0000 (0:00:01.735) 0:22:08.134 ******** 2026-03-19 04:58:15.189242 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:15.189246 | orchestrator | 2026-03-19 04:58:15.189250 | orchestrator | TASK [ceph-facts : Set_fact fsid from 
current_fsid] **************************** 2026-03-19 04:58:15.189254 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.168) 0:22:08.303 ******** 2026-03-19 04:58:15.189257 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:15.189261 | orchestrator | 2026-03-19 04:58:15.189267 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:58:17.174240 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.142) 0:22:08.445 ******** 2026-03-19 04:58:17.174338 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.174350 | orchestrator | 2026-03-19 04:58:17.174359 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:58:17.175199 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.280) 0:22:08.726 ******** 2026-03-19 04:58:17.175229 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175239 | orchestrator | 2026-03-19 04:58:17.175247 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:58:17.175255 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.151) 0:22:08.877 ******** 2026-03-19 04:58:17.175262 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175269 | orchestrator | 2026-03-19 04:58:17.175277 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:58:17.175309 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.158) 0:22:09.036 ******** 2026-03-19 04:58:17.175317 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175324 | orchestrator | 2026-03-19 04:58:17.175331 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:58:17.175339 | orchestrator | Thursday 19 March 2026 04:58:15 +0000 (0:00:00.150) 0:22:09.186 ******** 2026-03-19 04:58:17.175346 | orchestrator | skipping: 
[testbed-node-0] 2026-03-19 04:58:17.175353 | orchestrator | 2026-03-19 04:58:17.175361 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:58:17.175368 | orchestrator | Thursday 19 March 2026 04:58:16 +0000 (0:00:00.159) 0:22:09.346 ******** 2026-03-19 04:58:17.175375 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175382 | orchestrator | 2026-03-19 04:58:17.175390 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:58:17.175397 | orchestrator | Thursday 19 March 2026 04:58:16 +0000 (0:00:00.136) 0:22:09.482 ******** 2026-03-19 04:58:17.175404 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175411 | orchestrator | 2026-03-19 04:58:17.175419 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:58:17.175426 | orchestrator | Thursday 19 March 2026 04:58:16 +0000 (0:00:00.175) 0:22:09.658 ******** 2026-03-19 04:58:17.175434 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175441 | orchestrator | 2026-03-19 04:58:17.175448 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:58:17.175455 | orchestrator | Thursday 19 March 2026 04:58:16 +0000 (0:00:00.142) 0:22:09.800 ******** 2026-03-19 04:58:17.175465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:58:17.175516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:58:17.175581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:17.175597 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:17.175604 | orchestrator | 2026-03-19 04:58:17.175616 | 
orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:58:17.175624 | orchestrator | Thursday 19 March 2026 04:58:16 +0000 (0:00:00.314) 0:22:10.114 ******** 2026-03-19 04:58:17.175650 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725599 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725608 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725669 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '29171f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': 
[]}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1', 'scsi-SQEMU_QEMU_HARDDISK_29171f1c-6cc3-40cd-9178-0fa38eeda372-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725680 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725688 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:18.725762 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:18.725777 | orchestrator | 2026-03-19 04:58:18.725785 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:58:18.725795 | orchestrator | Thursday 19 March 2026 04:58:17 +0000 (0:00:00.318) 0:22:10.433 ******** 2026-03-19 04:58:18.725803 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:18.725812 | orchestrator | 2026-03-19 04:58:18.725819 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:58:18.725827 | orchestrator 
| Thursday 19 March 2026 04:58:18 +0000 (0:00:00.878) 0:22:11.312 ******** 2026-03-19 04:58:18.725834 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:18.725842 | orchestrator | 2026-03-19 04:58:18.725850 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:58:18.725857 | orchestrator | Thursday 19 March 2026 04:58:18 +0000 (0:00:00.149) 0:22:11.462 ******** 2026-03-19 04:58:18.725865 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:18.725873 | orchestrator | 2026-03-19 04:58:18.725881 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:58:18.725895 | orchestrator | Thursday 19 March 2026 04:58:18 +0000 (0:00:00.521) 0:22:11.983 ******** 2026-03-19 04:58:48.477218 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:48.477338 | orchestrator | 2026-03-19 04:58:48.477355 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:58:48.477368 | orchestrator | Thursday 19 March 2026 04:58:18 +0000 (0:00:00.152) 0:22:12.136 ******** 2026-03-19 04:58:48.477380 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:48.477392 | orchestrator | 2026-03-19 04:58:48.477403 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:58:48.477415 | orchestrator | Thursday 19 March 2026 04:58:19 +0000 (0:00:00.257) 0:22:12.393 ******** 2026-03-19 04:58:48.477426 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:48.477438 | orchestrator | 2026-03-19 04:58:48.477449 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:58:48.477460 | orchestrator | Thursday 19 March 2026 04:58:19 +0000 (0:00:00.152) 0:22:12.546 ******** 2026-03-19 04:58:48.477472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:58:48.477483 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-19 04:58:48.477495 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-19 04:58:48.477506 | orchestrator | 2026-03-19 04:58:48.477517 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:58:48.477529 | orchestrator | Thursday 19 March 2026 04:58:20 +0000 (0:00:00.720) 0:22:13.266 ******** 2026-03-19 04:58:48.477540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-19 04:58:48.477552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-19 04:58:48.477563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-19 04:58:48.477575 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:48.477586 | orchestrator | 2026-03-19 04:58:48.477597 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:58:48.477609 | orchestrator | Thursday 19 March 2026 04:58:20 +0000 (0:00:00.220) 0:22:13.487 ******** 2026-03-19 04:58:48.477620 | orchestrator | skipping: [testbed-node-0] 2026-03-19 04:58:48.477631 | orchestrator | 2026-03-19 04:58:48.477643 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:58:48.477654 | orchestrator | Thursday 19 March 2026 04:58:20 +0000 (0:00:00.151) 0:22:13.639 ******** 2026-03-19 04:58:48.477665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:58:48.477704 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:48.477777 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:48.477790 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:58:48.477804 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-19 04:58:48.477816 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:58:48.477829 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:58:48.477842 | orchestrator | 2026-03-19 04:58:48.477855 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:58:48.477868 | orchestrator | Thursday 19 March 2026 04:58:21 +0000 (0:00:01.110) 0:22:14.749 ******** 2026-03-19 04:58:48.477881 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-19 04:58:48.477893 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:48.477905 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:48.477918 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:58:48.477930 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:58:48.477943 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 04:58:48.477956 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:58:48.477968 | orchestrator | 2026-03-19 04:58:48.477981 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-19 04:58:48.477993 | orchestrator | Thursday 19 March 2026 04:58:23 +0000 (0:00:01.788) 0:22:16.537 ******** 2026-03-19 04:58:48.478006 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:48.478080 | orchestrator | 2026-03-19 04:58:48.478094 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-19 04:58:48.478106 | orchestrator | Thursday 19 March 2026 04:58:25 +0000 (0:00:02.341) 
0:22:18.879 ******** 2026-03-19 04:58:48.478117 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:48.478128 | orchestrator | 2026-03-19 04:58:48.478139 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-19 04:58:48.478150 | orchestrator | Thursday 19 March 2026 04:58:27 +0000 (0:00:02.214) 0:22:21.093 ******** 2026-03-19 04:58:48.478161 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:48.478172 | orchestrator | 2026-03-19 04:58:48.478183 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-19 04:58:48.478194 | orchestrator | Thursday 19 March 2026 04:58:29 +0000 (0:00:01.503) 0:22:22.596 ******** 2026-03-19 04:58:48.478227 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4742', 'value': {'gid': 4742, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/1971556707', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 1971556707}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 1971556707}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-19 04:58:48.478243 | orchestrator | 2026-03-19 04:58:48.478254 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-19 04:58:48.478265 | orchestrator | Thursday 19 March 2026 04:58:29 +0000 (0:00:00.198) 0:22:22.795 ******** 2026-03-19 04:58:48.478286 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-3)  2026-03-19 04:58:48.478297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-19 04:58:48.478308 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-03-19 04:58:48.478319 | orchestrator | 2026-03-19 04:58:48.478330 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-19 04:58:48.478341 | orchestrator | Thursday 19 March 2026 04:58:30 +0000 (0:00:00.552) 0:22:23.348 ******** 2026-03-19 04:58:48.478352 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-19 04:58:48.478363 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-03-19 04:58:48.478374 | orchestrator | 2026-03-19 04:58:48.478385 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-19 04:58:48.478396 | orchestrator | Thursday 19 March 2026 04:58:30 +0000 (0:00:00.542) 0:22:23.891 ******** 2026-03-19 04:58:48.478407 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:58:48.478417 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:58:48.478428 | orchestrator | 2026-03-19 04:58:48.478439 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-19 04:58:48.478450 | orchestrator | Thursday 19 March 2026 04:58:41 +0000 (0:00:10.436) 0:22:34.327 ******** 2026-03-19 04:58:48.478462 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:58:48.478478 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:58:48.478490 | orchestrator | 2026-03-19 04:58:48.478501 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-19 04:58:48.478512 | 
orchestrator | Thursday 19 March 2026 04:58:43 +0000 (0:00:02.863) 0:22:37.191 ******** 2026-03-19 04:58:48.478523 | orchestrator | ok: [testbed-node-0] 2026-03-19 04:58:48.478534 | orchestrator | 2026-03-19 04:58:48.478545 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-19 04:58:48.478555 | orchestrator | Thursday 19 March 2026 04:58:45 +0000 (0:00:01.354) 0:22:38.546 ******** 2026-03-19 04:58:48.478566 | orchestrator | changed: [testbed-node-0] 2026-03-19 04:58:48.478577 | orchestrator | 2026-03-19 04:58:48.478588 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-19 04:58:48.478599 | orchestrator | 2026-03-19 04:58:48.478610 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 04:58:48.478621 | orchestrator | Thursday 19 March 2026 04:58:46 +0000 (0:00:00.814) 0:22:39.360 ******** 2026-03-19 04:58:48.478632 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-19 04:58:48.478643 | orchestrator | 2026-03-19 04:58:48.478665 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 04:58:48.478676 | orchestrator | Thursday 19 March 2026 04:58:46 +0000 (0:00:00.239) 0:22:39.600 ******** 2026-03-19 04:58:48.478687 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478698 | orchestrator | 2026-03-19 04:58:48.478726 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 04:58:48.478738 | orchestrator | Thursday 19 March 2026 04:58:46 +0000 (0:00:00.472) 0:22:40.073 ******** 2026-03-19 04:58:48.478749 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478760 | orchestrator | 2026-03-19 04:58:48.478771 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 04:58:48.478782 | orchestrator | 
Thursday 19 March 2026 04:58:47 +0000 (0:00:00.417) 0:22:40.491 ******** 2026-03-19 04:58:48.478793 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478804 | orchestrator | 2026-03-19 04:58:48.478814 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 04:58:48.478826 | orchestrator | Thursday 19 March 2026 04:58:47 +0000 (0:00:00.467) 0:22:40.959 ******** 2026-03-19 04:58:48.478837 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478854 | orchestrator | 2026-03-19 04:58:48.478865 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 04:58:48.478876 | orchestrator | Thursday 19 March 2026 04:58:47 +0000 (0:00:00.144) 0:22:41.104 ******** 2026-03-19 04:58:48.478888 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478898 | orchestrator | 2026-03-19 04:58:48.478909 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 04:58:48.478920 | orchestrator | Thursday 19 March 2026 04:58:47 +0000 (0:00:00.150) 0:22:41.254 ******** 2026-03-19 04:58:48.478931 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.478942 | orchestrator | 2026-03-19 04:58:48.478953 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 04:58:48.478964 | orchestrator | Thursday 19 March 2026 04:58:48 +0000 (0:00:00.180) 0:22:41.435 ******** 2026-03-19 04:58:48.478975 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:48.478986 | orchestrator | 2026-03-19 04:58:48.478997 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 04:58:48.479008 | orchestrator | Thursday 19 March 2026 04:58:48 +0000 (0:00:00.148) 0:22:41.584 ******** 2026-03-19 04:58:48.479020 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:48.479031 | orchestrator | 2026-03-19 04:58:48.479049 | orchestrator | TASK 
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 04:58:56.768318 | orchestrator | Thursday 19 March 2026 04:58:48 +0000 (0:00:00.145) 0:22:41.730 ******** 2026-03-19 04:58:56.768433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:58:56.768449 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:56.768462 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:56.768474 | orchestrator | 2026-03-19 04:58:56.768486 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 04:58:56.768498 | orchestrator | Thursday 19 March 2026 04:58:49 +0000 (0:00:00.653) 0:22:42.384 ******** 2026-03-19 04:58:56.768509 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:56.768520 | orchestrator | 2026-03-19 04:58:56.768531 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 04:58:56.768542 | orchestrator | Thursday 19 March 2026 04:58:49 +0000 (0:00:00.267) 0:22:42.651 ******** 2026-03-19 04:58:56.768553 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:58:56.768564 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:58:56.768575 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:58:56.768586 | orchestrator | 2026-03-19 04:58:56.768597 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 04:58:56.768607 | orchestrator | Thursday 19 March 2026 04:58:51 +0000 (0:00:02.272) 0:22:44.924 ******** 2026-03-19 04:58:56.768619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 04:58:56.768630 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 04:58:56.768641 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 04:58:56.768704 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.768767 | orchestrator | 2026-03-19 04:58:56.768779 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 04:58:56.768790 | orchestrator | Thursday 19 March 2026 04:58:52 +0000 (0:00:00.415) 0:22:45.339 ******** 2026-03-19 04:58:56.768819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768870 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768884 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.768897 | orchestrator | 2026-03-19 04:58:56.768910 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 04:58:56.768924 | orchestrator | Thursday 19 March 2026 04:58:53 +0000 (0:00:00.950) 0:22:46.290 ******** 2026-03-19 04:58:56.768938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:56.768982 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.768995 | orchestrator | 2026-03-19 04:58:56.769008 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 04:58:56.769022 | orchestrator | Thursday 19 March 2026 04:58:53 +0000 (0:00:00.172) 0:22:46.463 ******** 2026-03-19 04:58:56.769055 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 04:58:49.977912', 'end': '2026-03-19 04:58:50.040316', 'delta': '0:00:00.062404', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:56.769070 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 04:58:50.558966', 'end': '2026-03-19 04:58:50.610261', 'delta': '0:00:00.051295', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:56.769087 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 04:58:51.447368', 'end': '2026-03-19 04:58:51.508884', 'delta': '0:00:00.061516', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 04:58:56.769108 | orchestrator | 2026-03-19 04:58:56.769119 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 04:58:56.769130 | 
orchestrator | Thursday 19 March 2026 04:58:53 +0000 (0:00:00.193) 0:22:46.657 ******** 2026-03-19 04:58:56.769141 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:56.769153 | orchestrator | 2026-03-19 04:58:56.769164 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 04:58:56.769175 | orchestrator | Thursday 19 March 2026 04:58:54 +0000 (0:00:00.898) 0:22:47.555 ******** 2026-03-19 04:58:56.769186 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.769197 | orchestrator | 2026-03-19 04:58:56.769208 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 04:58:56.769218 | orchestrator | Thursday 19 March 2026 04:58:54 +0000 (0:00:00.268) 0:22:47.823 ******** 2026-03-19 04:58:56.769229 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:56.769240 | orchestrator | 2026-03-19 04:58:56.769251 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 04:58:56.769262 | orchestrator | Thursday 19 March 2026 04:58:54 +0000 (0:00:00.150) 0:22:47.974 ******** 2026-03-19 04:58:56.769273 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 04:58:56.769284 | orchestrator | 2026-03-19 04:58:56.769295 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:58:56.769306 | orchestrator | Thursday 19 March 2026 04:58:55 +0000 (0:00:01.028) 0:22:49.002 ******** 2026-03-19 04:58:56.769317 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:56.769327 | orchestrator | 2026-03-19 04:58:56.769338 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 04:58:56.769349 | orchestrator | Thursday 19 March 2026 04:58:55 +0000 (0:00:00.149) 0:22:49.152 ******** 2026-03-19 04:58:56.769360 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.769371 | orchestrator | 
2026-03-19 04:58:56.769382 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 04:58:56.769392 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.133) 0:22:49.285 ******** 2026-03-19 04:58:56.769403 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.769414 | orchestrator | 2026-03-19 04:58:56.769425 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 04:58:56.769436 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.244) 0:22:49.529 ******** 2026-03-19 04:58:56.769447 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.769458 | orchestrator | 2026-03-19 04:58:56.769469 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 04:58:56.769480 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.174) 0:22:49.704 ******** 2026-03-19 04:58:56.769491 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:56.769502 | orchestrator | 2026-03-19 04:58:56.769513 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 04:58:56.769524 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.127) 0:22:49.832 ******** 2026-03-19 04:58:56.769542 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:57.918293 | orchestrator | 2026-03-19 04:58:57.918383 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 04:58:57.918393 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.193) 0:22:50.025 ******** 2026-03-19 04:58:57.918399 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:57.918408 | orchestrator | 2026-03-19 04:58:57.918437 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 04:58:57.918444 | orchestrator | Thursday 19 March 2026 04:58:56 +0000 (0:00:00.110) 
0:22:50.136 ******** 2026-03-19 04:58:57.918451 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:57.918458 | orchestrator | 2026-03-19 04:58:57.918464 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 04:58:57.918470 | orchestrator | Thursday 19 March 2026 04:58:57 +0000 (0:00:00.213) 0:22:50.349 ******** 2026-03-19 04:58:57.918476 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:57.918482 | orchestrator | 2026-03-19 04:58:57.918488 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 04:58:57.918496 | orchestrator | Thursday 19 March 2026 04:58:57 +0000 (0:00:00.114) 0:22:50.464 ******** 2026-03-19 04:58:57.918502 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:58:57.918508 | orchestrator | 2026-03-19 04:58:57.918514 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 04:58:57.918520 | orchestrator | Thursday 19 March 2026 04:58:57 +0000 (0:00:00.486) 0:22:50.950 ******** 2026-03-19 04:58:57.918529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}})  2026-03-19 04:58:57.918565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 04:58:57.918573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}})  2026-03-19 04:58:57.918582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 04:58:57.918625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:57.918648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}})  2026-03-19 04:58:57.918654 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}})  2026-03-19 04:58:57.918671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:58.266962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 04:58:58.267063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:58.267077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 04:58:58.267086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 04:58:58.267118 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:58:58.267129 | orchestrator | 2026-03-19 04:58:58.267138 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 04:58:58.267146 | orchestrator | Thursday 19 March 2026 04:58:58 +0000 (0:00:00.363) 0:22:51.314 ******** 2026-03-19 04:58:58.267173 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.267183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.267199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.267208 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.267224 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.267238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477446 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:58:58.477516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:59:08.965928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 04:59:08.966075 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966091 | orchestrator | 2026-03-19 04:59:08.966102 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 04:59:08.966111 | orchestrator | Thursday 19 March 2026 04:58:58 +0000 (0:00:00.418) 0:22:51.732 ******** 2026-03-19 04:59:08.966119 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.966128 | orchestrator | 2026-03-19 04:59:08.966136 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 04:59:08.966144 | orchestrator | Thursday 19 March 2026 04:58:58 +0000 (0:00:00.519) 0:22:52.251 ******** 2026-03-19 04:59:08.966152 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.966160 | orchestrator | 2026-03-19 04:59:08.966168 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:59:08.966176 | orchestrator | Thursday 19 March 2026 04:58:59 +0000 (0:00:00.132) 0:22:52.384 ******** 2026-03-19 04:59:08.966184 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.966192 | orchestrator | 2026-03-19 04:59:08.966200 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:59:08.966225 | orchestrator | Thursday 19 March 2026 04:58:59 +0000 (0:00:00.511) 0:22:52.895 ******** 2026-03-19 04:59:08.966234 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966242 | orchestrator | 2026-03-19 04:59:08.966250 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 04:59:08.966257 | orchestrator | Thursday 19 March 2026 04:58:59 +0000 (0:00:00.127) 0:22:53.022 ******** 2026-03-19 04:59:08.966265 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
04:59:08.966273 | orchestrator | 2026-03-19 04:59:08.966281 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 04:59:08.966289 | orchestrator | Thursday 19 March 2026 04:59:00 +0000 (0:00:00.269) 0:22:53.292 ******** 2026-03-19 04:59:08.966297 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966305 | orchestrator | 2026-03-19 04:59:08.966313 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 04:59:08.966320 | orchestrator | Thursday 19 March 2026 04:59:00 +0000 (0:00:00.178) 0:22:53.470 ******** 2026-03-19 04:59:08.966328 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 04:59:08.966336 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 04:59:08.966344 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 04:59:08.966352 | orchestrator | 2026-03-19 04:59:08.966360 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 04:59:08.966368 | orchestrator | Thursday 19 March 2026 04:59:01 +0000 (0:00:01.075) 0:22:54.546 ******** 2026-03-19 04:59:08.966376 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 04:59:08.966384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 04:59:08.966392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 04:59:08.966400 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966408 | orchestrator | 2026-03-19 04:59:08.966416 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 04:59:08.966424 | orchestrator | Thursday 19 March 2026 04:59:01 +0000 (0:00:00.175) 0:22:54.722 ******** 2026-03-19 04:59:08.966432 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-19 04:59:08.966440 | 
orchestrator | 2026-03-19 04:59:08.966449 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 04:59:08.966458 | orchestrator | Thursday 19 March 2026 04:59:01 +0000 (0:00:00.247) 0:22:54.969 ******** 2026-03-19 04:59:08.966466 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966473 | orchestrator | 2026-03-19 04:59:08.966481 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 04:59:08.966489 | orchestrator | Thursday 19 March 2026 04:59:02 +0000 (0:00:00.480) 0:22:55.450 ******** 2026-03-19 04:59:08.966497 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966505 | orchestrator | 2026-03-19 04:59:08.966513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 04:59:08.966521 | orchestrator | Thursday 19 March 2026 04:59:02 +0000 (0:00:00.156) 0:22:55.607 ******** 2026-03-19 04:59:08.966529 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966537 | orchestrator | 2026-03-19 04:59:08.966544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 04:59:08.966552 | orchestrator | Thursday 19 March 2026 04:59:02 +0000 (0:00:00.154) 0:22:55.762 ******** 2026-03-19 04:59:08.966560 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.966568 | orchestrator | 2026-03-19 04:59:08.966576 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 04:59:08.966583 | orchestrator | Thursday 19 March 2026 04:59:02 +0000 (0:00:00.232) 0:22:55.994 ******** 2026-03-19 04:59:08.966591 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:59:08.966615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:59:08.966623 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-19 04:59:08.966637 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966646 | orchestrator | 2026-03-19 04:59:08.966654 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 04:59:08.966662 | orchestrator | Thursday 19 March 2026 04:59:03 +0000 (0:00:00.412) 0:22:56.407 ******** 2026-03-19 04:59:08.966670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:59:08.966678 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:59:08.966685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 04:59:08.966693 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966701 | orchestrator | 2026-03-19 04:59:08.966741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 04:59:08.966749 | orchestrator | Thursday 19 March 2026 04:59:03 +0000 (0:00:00.393) 0:22:56.800 ******** 2026-03-19 04:59:08.966757 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 04:59:08.966771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 04:59:08.966779 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 04:59:08.966787 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.966795 | orchestrator | 2026-03-19 04:59:08.966803 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 04:59:08.966811 | orchestrator | Thursday 19 March 2026 04:59:03 +0000 (0:00:00.425) 0:22:57.226 ******** 2026-03-19 04:59:08.966819 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.966827 | orchestrator | 2026-03-19 04:59:08.966835 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 04:59:08.966843 | orchestrator | Thursday 19 March 2026 04:59:04 +0000 
(0:00:00.169) 0:22:57.395 ******** 2026-03-19 04:59:08.966851 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-19 04:59:08.966858 | orchestrator | 2026-03-19 04:59:08.966866 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 04:59:08.966874 | orchestrator | Thursday 19 March 2026 04:59:04 +0000 (0:00:00.370) 0:22:57.766 ******** 2026-03-19 04:59:08.966882 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:59:08.966890 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:59:08.966898 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:59:08.966906 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 04:59:08.966914 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:59:08.966922 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-19 04:59:08.966930 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:59:08.966937 | orchestrator | 2026-03-19 04:59:08.966945 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 04:59:08.966953 | orchestrator | Thursday 19 March 2026 04:59:05 +0000 (0:00:01.239) 0:22:59.006 ******** 2026-03-19 04:59:08.966961 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 04:59:08.966969 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 04:59:08.966977 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 04:59:08.966985 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-19 04:59:08.966993 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-19 04:59:08.967001 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-19 04:59:08.967008 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 04:59:08.967016 | orchestrator | 2026-03-19 04:59:08.967024 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-03-19 04:59:08.967038 | orchestrator | Thursday 19 March 2026 04:59:07 +0000 (0:00:01.645) 0:23:00.651 ******** 2026-03-19 04:59:08.967046 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.967053 | orchestrator | 2026-03-19 04:59:08.967061 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 04:59:08.967069 | orchestrator | Thursday 19 March 2026 04:59:07 +0000 (0:00:00.128) 0:23:00.780 ******** 2026-03-19 04:59:08.967077 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-19 04:59:08.967085 | orchestrator | 2026-03-19 04:59:08.967093 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 04:59:08.967101 | orchestrator | Thursday 19 March 2026 04:59:08 +0000 (0:00:00.553) 0:23:01.333 ******** 2026-03-19 04:59:08.967109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-19 04:59:08.967117 | orchestrator | 2026-03-19 04:59:08.967125 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 04:59:08.967132 | orchestrator | Thursday 19 March 2026 04:59:08 +0000 (0:00:00.215) 0:23:01.548 ******** 2026-03-19 04:59:08.967140 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:08.967148 | orchestrator | 2026-03-19 04:59:08.967156 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 04:59:08.967164 | orchestrator | Thursday 19 March 2026 04:59:08 +0000 (0:00:00.141) 0:23:01.690 ******** 2026-03-19 04:59:08.967172 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:08.967180 | orchestrator | 2026-03-19 04:59:08.967188 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 04:59:08.967201 | orchestrator | Thursday 19 March 2026 04:59:08 +0000 (0:00:00.526) 0:23:02.217 ******** 2026-03-19 04:59:20.519481 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:20.519580 | orchestrator | 2026-03-19 04:59:20.519592 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 04:59:20.519601 | orchestrator | Thursday 19 March 2026 04:59:09 +0000 (0:00:00.564) 0:23:02.781 ******** 2026-03-19 04:59:20.519609 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:20.519616 | orchestrator | 2026-03-19 04:59:20.519623 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 04:59:20.519631 | orchestrator | Thursday 19 March 2026 04:59:10 +0000 (0:00:00.553) 0:23:03.335 ******** 2026-03-19 04:59:20.519638 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:20.519647 | orchestrator | 2026-03-19 04:59:20.519654 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 04:59:20.519661 | orchestrator | Thursday 19 March 2026 04:59:10 +0000 (0:00:00.139) 0:23:03.474 ******** 2026-03-19 04:59:20.519669 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:20.519676 | orchestrator | 2026-03-19 04:59:20.519683 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 04:59:20.519704 | orchestrator | Thursday 19 March 2026 04:59:10 +0000 (0:00:00.137) 0:23:03.612 ******** 2026-03-19 04:59:20.519758 | 
orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:20.519766 | orchestrator | 2026-03-19 04:59:20.519774 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 04:59:20.519781 | orchestrator | Thursday 19 March 2026 04:59:10 +0000 (0:00:00.132) 0:23:03.745 ******** 2026-03-19 04:59:20.519789 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:20.519796 | orchestrator | 2026-03-19 04:59:20.519803 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 04:59:20.519811 | orchestrator | Thursday 19 March 2026 04:59:11 +0000 (0:00:00.553) 0:23:04.299 ******** 2026-03-19 04:59:20.519818 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:20.519825 | orchestrator | 2026-03-19 04:59:20.519833 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 04:59:20.519840 | orchestrator | Thursday 19 March 2026 04:59:11 +0000 (0:00:00.585) 0:23:04.884 ******** 2026-03-19 04:59:20.519864 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:20.519871 | orchestrator | 2026-03-19 04:59:20.519879 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 04:59:20.519886 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.425) 0:23:05.309 ******** 2026-03-19 04:59:20.519893 | orchestrator | skipping: [testbed-node-5] 2026-03-19 04:59:20.519900 | orchestrator | 2026-03-19 04:59:20.519908 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 04:59:20.519915 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.112) 0:23:05.422 ******** 2026-03-19 04:59:20.519922 | orchestrator | ok: [testbed-node-5] 2026-03-19 04:59:20.519929 | orchestrator | 2026-03-19 04:59:20.519936 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 
04:59:20.519943 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.169) 0:23:05.592 ********
2026-03-19 04:59:20.519951 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.519958 | orchestrator |
2026-03-19 04:59:20.519965 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 04:59:20.519973 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.162) 0:23:05.754 ********
2026-03-19 04:59:20.519980 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.519987 | orchestrator |
2026-03-19 04:59:20.519994 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 04:59:20.520001 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.169) 0:23:05.923 ********
2026-03-19 04:59:20.520009 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520016 | orchestrator |
2026-03-19 04:59:20.520023 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 04:59:20.520030 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.138) 0:23:06.062 ********
2026-03-19 04:59:20.520039 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520047 | orchestrator |
2026-03-19 04:59:20.520055 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 04:59:20.520063 | orchestrator | Thursday 19 March 2026 04:59:12 +0000 (0:00:00.163) 0:23:06.225 ********
2026-03-19 04:59:20.520071 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520079 | orchestrator |
2026-03-19 04:59:20.520087 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 04:59:20.520095 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.154) 0:23:06.379 ********
2026-03-19 04:59:20.520103 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.520111 | orchestrator |
2026-03-19 04:59:20.520119 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 04:59:20.520127 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.161) 0:23:06.541 ********
2026-03-19 04:59:20.520135 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.520143 | orchestrator |
2026-03-19 04:59:20.520150 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 04:59:20.520157 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.225) 0:23:06.766 ********
2026-03-19 04:59:20.520164 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520172 | orchestrator |
2026-03-19 04:59:20.520179 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 04:59:20.520187 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.134) 0:23:06.901 ********
2026-03-19 04:59:20.520194 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520201 | orchestrator |
2026-03-19 04:59:20.520208 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 04:59:20.520215 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.132) 0:23:07.033 ********
2026-03-19 04:59:20.520222 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520230 | orchestrator |
2026-03-19 04:59:20.520237 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 04:59:20.520244 | orchestrator | Thursday 19 March 2026 04:59:13 +0000 (0:00:00.133) 0:23:07.167 ********
2026-03-19 04:59:20.520254 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520273 | orchestrator |
2026-03-19 04:59:20.520286 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 04:59:20.520316 | orchestrator | Thursday 19 March 2026 04:59:14 +0000 (0:00:00.458) 0:23:07.626 ********
2026-03-19 04:59:20.520325 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520332 | orchestrator |
2026-03-19 04:59:20.520339 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 04:59:20.520347 | orchestrator | Thursday 19 March 2026 04:59:14 +0000 (0:00:00.134) 0:23:07.760 ********
2026-03-19 04:59:20.520354 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520361 | orchestrator |
2026-03-19 04:59:20.520368 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 04:59:20.520375 | orchestrator | Thursday 19 March 2026 04:59:14 +0000 (0:00:00.133) 0:23:07.894 ********
2026-03-19 04:59:20.520383 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520390 | orchestrator |
2026-03-19 04:59:20.520397 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 04:59:20.520405 | orchestrator | Thursday 19 March 2026 04:59:14 +0000 (0:00:00.121) 0:23:08.015 ********
2026-03-19 04:59:20.520417 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520425 | orchestrator |
2026-03-19 04:59:20.520432 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 04:59:20.520439 | orchestrator | Thursday 19 March 2026 04:59:14 +0000 (0:00:00.115) 0:23:08.131 ********
2026-03-19 04:59:20.520446 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520453 | orchestrator |
2026-03-19 04:59:20.520460 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 04:59:20.520468 | orchestrator | Thursday 19 March 2026 04:59:15 +0000 (0:00:00.132) 0:23:08.263 ********
2026-03-19 04:59:20.520475 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520482 | orchestrator |
2026-03-19 04:59:20.520489 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 04:59:20.520496 | orchestrator | Thursday 19 March 2026 04:59:15 +0000 (0:00:00.122) 0:23:08.386 ********
2026-03-19 04:59:20.520503 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520511 | orchestrator |
2026-03-19 04:59:20.520518 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 04:59:20.520525 | orchestrator | Thursday 19 March 2026 04:59:15 +0000 (0:00:00.134) 0:23:08.520 ********
2026-03-19 04:59:20.520532 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520539 | orchestrator |
2026-03-19 04:59:20.520546 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 04:59:20.520553 | orchestrator | Thursday 19 March 2026 04:59:15 +0000 (0:00:00.205) 0:23:08.726 ********
2026-03-19 04:59:20.520560 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.520567 | orchestrator |
2026-03-19 04:59:20.520575 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 04:59:20.520582 | orchestrator | Thursday 19 March 2026 04:59:16 +0000 (0:00:00.947) 0:23:09.674 ********
2026-03-19 04:59:20.520589 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.520596 | orchestrator |
2026-03-19 04:59:20.520603 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 04:59:20.520610 | orchestrator | Thursday 19 March 2026 04:59:17 +0000 (0:00:01.248) 0:23:10.922 ********
2026-03-19 04:59:20.520618 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-19 04:59:20.520626 | orchestrator |
2026-03-19 04:59:20.520633 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 04:59:20.520640 | orchestrator | Thursday 19 March 2026 04:59:18 +0000 (0:00:00.523) 0:23:11.446 ********
2026-03-19 04:59:20.520647 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520654 | orchestrator |
2026-03-19 04:59:20.520661 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 04:59:20.520668 | orchestrator | Thursday 19 March 2026 04:59:18 +0000 (0:00:00.147) 0:23:11.593 ********
2026-03-19 04:59:20.520681 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520688 | orchestrator |
2026-03-19 04:59:20.520695 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 04:59:20.520702 | orchestrator | Thursday 19 March 2026 04:59:18 +0000 (0:00:00.152) 0:23:11.746 ********
2026-03-19 04:59:20.520727 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 04:59:20.520735 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 04:59:20.520742 | orchestrator |
2026-03-19 04:59:20.520749 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 04:59:20.520756 | orchestrator | Thursday 19 March 2026 04:59:19 +0000 (0:00:00.873) 0:23:12.620 ********
2026-03-19 04:59:20.520764 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:20.520771 | orchestrator |
2026-03-19 04:59:20.520778 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 04:59:20.520785 | orchestrator | Thursday 19 March 2026 04:59:19 +0000 (0:00:00.486) 0:23:13.106 ********
2026-03-19 04:59:20.520792 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520799 | orchestrator |
2026-03-19 04:59:20.520806 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 04:59:20.520814 | orchestrator | Thursday 19 March 2026 04:59:19 +0000 (0:00:00.150) 0:23:13.256 ********
2026-03-19 04:59:20.520821 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520828 | orchestrator |
2026-03-19 04:59:20.520835 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 04:59:20.520842 | orchestrator | Thursday 19 March 2026 04:59:20 +0000 (0:00:00.150) 0:23:13.406 ********
2026-03-19 04:59:20.520849 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:20.520856 | orchestrator |
2026-03-19 04:59:20.520863 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 04:59:20.520871 | orchestrator | Thursday 19 March 2026 04:59:20 +0000 (0:00:00.150) 0:23:13.557 ********
2026-03-19 04:59:20.520878 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-19 04:59:20.520885 | orchestrator |
2026-03-19 04:59:20.520892 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 04:59:20.520904 | orchestrator | Thursday 19 March 2026 04:59:20 +0000 (0:00:00.214) 0:23:13.771 ********
2026-03-19 04:59:35.360272 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:35.360392 | orchestrator |
2026-03-19 04:59:35.360408 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 04:59:35.360420 | orchestrator | Thursday 19 March 2026 04:59:21 +0000 (0:00:00.680) 0:23:14.452 ********
2026-03-19 04:59:35.360433 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 04:59:35.360444 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 04:59:35.360456 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 04:59:35.360467 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360479 | orchestrator |
2026-03-19 04:59:35.360490 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 04:59:35.360501 | orchestrator | Thursday 19 March 2026 04:59:21 +0000 (0:00:00.149) 0:23:14.601 ********
2026-03-19 04:59:35.360529 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360540 | orchestrator |
2026-03-19 04:59:35.360552 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 04:59:35.360566 | orchestrator | Thursday 19 March 2026 04:59:21 +0000 (0:00:00.151) 0:23:14.753 ********
2026-03-19 04:59:35.360585 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360605 | orchestrator |
2026-03-19 04:59:35.360623 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 04:59:35.360642 | orchestrator | Thursday 19 March 2026 04:59:21 +0000 (0:00:00.450) 0:23:15.204 ********
2026-03-19 04:59:35.360688 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360740 | orchestrator |
2026-03-19 04:59:35.360760 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 04:59:35.360779 | orchestrator | Thursday 19 March 2026 04:59:22 +0000 (0:00:00.163) 0:23:15.368 ********
2026-03-19 04:59:35.360798 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360816 | orchestrator |
2026-03-19 04:59:35.360834 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 04:59:35.360853 | orchestrator | Thursday 19 March 2026 04:59:22 +0000 (0:00:00.148) 0:23:15.516 ********
2026-03-19 04:59:35.360872 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.360892 | orchestrator |
2026-03-19 04:59:35.360911 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 04:59:35.360930 | orchestrator | Thursday 19 March 2026 04:59:22 +0000 (0:00:00.159) 0:23:15.676 ********
2026-03-19 04:59:35.360949 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:35.360962 | orchestrator |
2026-03-19 04:59:35.360976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 04:59:35.360988 | orchestrator | Thursday 19 March 2026 04:59:23 +0000 (0:00:01.535) 0:23:17.212 ********
2026-03-19 04:59:35.361001 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:35.361013 | orchestrator |
2026-03-19 04:59:35.361026 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 04:59:35.361038 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.145) 0:23:17.357 ********
2026-03-19 04:59:35.361051 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-19 04:59:35.361069 | orchestrator |
2026-03-19 04:59:35.361097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 04:59:35.361118 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.256) 0:23:17.614 ********
2026-03-19 04:59:35.361135 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361153 | orchestrator |
2026-03-19 04:59:35.361170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 04:59:35.361188 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.153) 0:23:17.767 ********
2026-03-19 04:59:35.361206 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361226 | orchestrator |
2026-03-19 04:59:35.361246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 04:59:35.361264 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.145) 0:23:17.913 ********
2026-03-19 04:59:35.361281 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361292 | orchestrator |
2026-03-19 04:59:35.361303 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 04:59:35.361316 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.173) 0:23:18.086 ********
2026-03-19 04:59:35.361334 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361351 | orchestrator |
2026-03-19 04:59:35.361369 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 04:59:35.361387 | orchestrator | Thursday 19 March 2026 04:59:24 +0000 (0:00:00.167) 0:23:18.253 ********
2026-03-19 04:59:35.361405 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361423 | orchestrator |
2026-03-19 04:59:35.361441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 04:59:35.361458 | orchestrator | Thursday 19 March 2026 04:59:25 +0000 (0:00:00.146) 0:23:18.400 ********
2026-03-19 04:59:35.361477 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361496 | orchestrator |
2026-03-19 04:59:35.361515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 04:59:35.361533 | orchestrator | Thursday 19 March 2026 04:59:25 +0000 (0:00:00.456) 0:23:18.856 ********
2026-03-19 04:59:35.361547 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361558 | orchestrator |
2026-03-19 04:59:35.361569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 04:59:35.361603 | orchestrator | Thursday 19 March 2026 04:59:25 +0000 (0:00:00.156) 0:23:19.013 ********
2026-03-19 04:59:35.361620 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.361639 | orchestrator |
2026-03-19 04:59:35.361657 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 04:59:35.361676 | orchestrator | Thursday 19 March 2026 04:59:25 +0000 (0:00:00.159) 0:23:19.172 ********
2026-03-19 04:59:35.361694 | orchestrator | ok: [testbed-node-5]
2026-03-19 04:59:35.361747 | orchestrator |
2026-03-19 04:59:35.361773 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 04:59:35.361818 | orchestrator | Thursday 19 March 2026 04:59:26 +0000 (0:00:00.245) 0:23:19.417 ********
2026-03-19 04:59:35.361838 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-19 04:59:35.361859 | orchestrator |
2026-03-19 04:59:35.361876 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 04:59:35.361893 | orchestrator | Thursday 19 March 2026 04:59:26 +0000 (0:00:00.197) 0:23:19.615 ********
2026-03-19 04:59:35.361910 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-19 04:59:35.361927 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-19 04:59:35.361945 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-19 04:59:35.361964 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-19 04:59:35.361983 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-19 04:59:35.362001 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-19 04:59:35.362117 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-19 04:59:35.362141 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-19 04:59:35.362159 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 04:59:35.362176 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 04:59:35.362195 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 04:59:35.362214 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 04:59:35.362229 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 04:59:35.362240 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 04:59:35.362251 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-19 04:59:35.362262 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-19 04:59:35.362272 | orchestrator |
2026-03-19 04:59:35.362283 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 04:59:35.362294 | orchestrator | Thursday 19 March 2026 04:59:32 +0000 (0:00:05.704) 0:23:25.320 ********
2026-03-19 04:59:35.362305 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-19 04:59:35.362316 | orchestrator |
2026-03-19 04:59:35.362327 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 04:59:35.362338 | orchestrator | Thursday 19 March 2026 04:59:32 +0000 (0:00:00.236) 0:23:25.556 ********
2026-03-19 04:59:35.362349 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 04:59:35.362361 | orchestrator |
2026-03-19 04:59:35.362372 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 04:59:35.362383 | orchestrator | Thursday 19 March 2026 04:59:32 +0000 (0:00:00.530) 0:23:26.087 ********
2026-03-19 04:59:35.362394 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 04:59:35.362405 | orchestrator |
2026-03-19 04:59:35.362419 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 04:59:35.362438 | orchestrator | Thursday 19 March 2026 04:59:33 +0000 (0:00:00.985) 0:23:27.072 ********
2026-03-19 04:59:35.362455 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362487 | orchestrator |
2026-03-19 04:59:35.362505 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 04:59:35.362523 | orchestrator | Thursday 19 March 2026 04:59:33 +0000 (0:00:00.137) 0:23:27.210 ********
2026-03-19 04:59:35.362543 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362555 | orchestrator |
2026-03-19 04:59:35.362566 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 04:59:35.362577 | orchestrator | Thursday 19 March 2026 04:59:34 +0000 (0:00:00.136) 0:23:27.346 ********
2026-03-19 04:59:35.362588 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362598 | orchestrator |
2026-03-19 04:59:35.362609 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 04:59:35.362620 | orchestrator | Thursday 19 March 2026 04:59:34 +0000 (0:00:00.414) 0:23:27.761 ********
2026-03-19 04:59:35.362631 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362642 | orchestrator |
2026-03-19 04:59:35.362652 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 04:59:35.362663 | orchestrator | Thursday 19 March 2026 04:59:34 +0000 (0:00:00.135) 0:23:27.897 ********
2026-03-19 04:59:35.362674 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362685 | orchestrator |
2026-03-19 04:59:35.362696 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 04:59:35.362706 | orchestrator | Thursday 19 March 2026 04:59:34 +0000 (0:00:00.147) 0:23:28.044 ********
2026-03-19 04:59:35.362743 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362754 | orchestrator |
2026-03-19 04:59:35.362765 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 04:59:35.362776 | orchestrator | Thursday 19 March 2026 04:59:34 +0000 (0:00:00.141) 0:23:28.186 ********
2026-03-19 04:59:35.362787 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362798 | orchestrator |
2026-03-19 04:59:35.362808 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 04:59:35.362819 | orchestrator | Thursday 19 March 2026 04:59:35 +0000 (0:00:00.133) 0:23:28.319 ********
2026-03-19 04:59:35.362829 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362840 | orchestrator |
2026-03-19 04:59:35.362851 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 04:59:35.362862 | orchestrator | Thursday 19 March 2026 04:59:35 +0000 (0:00:00.138) 0:23:28.458 ********
2026-03-19 04:59:35.362873 | orchestrator | skipping: [testbed-node-5]
2026-03-19 04:59:35.362883 | orchestrator |
2026-03-19 04:59:35.362907 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 05:00:00.261889 | orchestrator | Thursday 19 March 2026 04:59:35 +0000 (0:00:00.152) 0:23:28.611 ********
2026-03-19 05:00:00.262005 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262067 | orchestrator |
2026-03-19 05:00:00.262078 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 05:00:00.262086 | orchestrator | Thursday 19 March 2026 04:59:35 +0000 (0:00:00.133) 0:23:28.744 ********
2026-03-19 05:00:00.262095 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262103 | orchestrator |
2026-03-19 05:00:00.262112 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 05:00:00.262120 | orchestrator | Thursday 19 March 2026 04:59:35 +0000 (0:00:00.152) 0:23:28.896 ********
2026-03-19 05:00:00.262129 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-19 05:00:00.262137 | orchestrator |
2026-03-19 05:00:00.262157 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 05:00:00.262166 | orchestrator | Thursday 19 March 2026 04:59:39 +0000 (0:00:03.505) 0:23:32.401 ********
2026-03-19 05:00:00.262176 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:00:00.262185 | orchestrator |
2026-03-19 05:00:00.262193 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 05:00:00.262222 | orchestrator | Thursday 19 March 2026 04:59:39 +0000 (0:00:00.182) 0:23:32.583 ********
2026-03-19 05:00:00.262233 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-19 05:00:00.262244 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-19 05:00:00.262254 | orchestrator |
2026-03-19 05:00:00.262262 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 05:00:00.262270 | orchestrator | Thursday 19 March 2026 04:59:43 +0000 (0:00:04.078) 0:23:36.661 ********
2026-03-19 05:00:00.262278 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262286 | orchestrator |
2026-03-19 05:00:00.262294 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 05:00:00.262302 | orchestrator | Thursday 19 March 2026 04:59:43 +0000 (0:00:00.128) 0:23:36.790 ********
2026-03-19 05:00:00.262310 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262318 | orchestrator |
2026-03-19 05:00:00.262326 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 05:00:00.262334 | orchestrator | Thursday 19 March 2026 04:59:43 +0000 (0:00:00.416) 0:23:37.206 ********
2026-03-19 05:00:00.262342 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262353 | orchestrator |
2026-03-19 05:00:00.262362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 05:00:00.262371 | orchestrator | Thursday 19 March 2026 04:59:44 +0000 (0:00:00.173) 0:23:37.379 ********
2026-03-19 05:00:00.262381 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262391 | orchestrator |
2026-03-19 05:00:00.262400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 05:00:00.262410 | orchestrator | Thursday 19 March 2026 04:59:44 +0000 (0:00:00.175) 0:23:37.555 ********
2026-03-19 05:00:00.262423 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262437 | orchestrator |
2026-03-19 05:00:00.262451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 05:00:00.262464 | orchestrator | Thursday 19 March 2026 04:59:44 +0000 (0:00:00.161) 0:23:37.717 ********
2026-03-19 05:00:00.262478 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.262491 | orchestrator |
2026-03-19 05:00:00.262505 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 05:00:00.262519 | orchestrator | Thursday 19 March 2026 04:59:44 +0000 (0:00:00.239) 0:23:37.956 ********
2026-03-19 05:00:00.262534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:00:00.262548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:00:00.262562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:00:00.262575 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262587 | orchestrator |
2026-03-19 05:00:00.262596 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 05:00:00.262604 | orchestrator | Thursday 19 March 2026 04:59:45 +0000 (0:00:00.426) 0:23:38.382 ********
2026-03-19 05:00:00.262612 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:00:00.262620 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:00:00.262627 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:00:00.262635 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262651 | orchestrator |
2026-03-19 05:00:00.262659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 05:00:00.262667 | orchestrator | Thursday 19 March 2026 04:59:45 +0000 (0:00:00.453) 0:23:38.836 ********
2026-03-19 05:00:00.262675 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:00:00.262683 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:00:00.262691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:00:00.262732 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262745 | orchestrator |
2026-03-19 05:00:00.262754 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 05:00:00.262762 | orchestrator | Thursday 19 March 2026 04:59:45 +0000 (0:00:00.417) 0:23:39.254 ********
2026-03-19 05:00:00.262770 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.262778 | orchestrator |
2026-03-19 05:00:00.262786 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 05:00:00.262794 | orchestrator | Thursday 19 March 2026 04:59:46 +0000 (0:00:00.178) 0:23:39.432 ********
2026-03-19 05:00:00.262802 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-19 05:00:00.262810 | orchestrator |
2026-03-19 05:00:00.262818 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 05:00:00.262825 | orchestrator | Thursday 19 March 2026 04:59:46 +0000 (0:00:00.395) 0:23:39.827 ********
2026-03-19 05:00:00.262839 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.262847 | orchestrator |
2026-03-19 05:00:00.262855 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-19 05:00:00.262863 | orchestrator | Thursday 19 March 2026 04:59:47 +0000 (0:00:00.814) 0:23:40.642 ********
2026-03-19 05:00:00.262871 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.262879 | orchestrator |
2026-03-19 05:00:00.262887 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-19 05:00:00.262896 | orchestrator | Thursday 19 March 2026 04:59:47 +0000 (0:00:00.441) 0:23:41.083 ********
2026-03-19 05:00:00.262910 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5
2026-03-19 05:00:00.262924 | orchestrator |
2026-03-19 05:00:00.262937 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-19 05:00:00.262951 | orchestrator | Thursday 19 March 2026 04:59:48 +0000 (0:00:00.604) 0:23:41.688 ********
2026-03-19 05:00:00.262963 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-19 05:00:00.262972 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-19 05:00:00.262979 | orchestrator |
2026-03-19 05:00:00.262987 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-19 05:00:00.262995 | orchestrator | Thursday 19 March 2026 04:59:49 +0000 (0:00:00.865) 0:23:42.553 ********
2026-03-19 05:00:00.263003 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 05:00:00.263011 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-19 05:00:00.263019 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-19 05:00:00.263027 | orchestrator |
2026-03-19 05:00:00.263035 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-19 05:00:00.263043 | orchestrator | Thursday 19 March 2026 04:59:51 +0000 (0:00:02.375) 0:23:44.929 ********
2026-03-19 05:00:00.263051 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-19 05:00:00.263059 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-19 05:00:00.263067 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263075 | orchestrator |
2026-03-19 05:00:00.263083 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-19 05:00:00.263090 | orchestrator | Thursday 19 March 2026 04:59:52 +0000 (0:00:00.989) 0:23:45.919 ********
2026-03-19 05:00:00.263098 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263106 | orchestrator |
2026-03-19 05:00:00.263114 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-19 05:00:00.263128 | orchestrator | Thursday 19 March 2026 04:59:53 +0000 (0:00:00.528) 0:23:46.447 ********
2026-03-19 05:00:00.263136 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:00.263144 | orchestrator |
2026-03-19 05:00:00.263152 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-19 05:00:00.263160 | orchestrator | Thursday 19 March 2026 04:59:53 +0000 (0:00:00.125) 0:23:46.573 ********
2026-03-19 05:00:00.263168 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-03-19 05:00:00.263177 | orchestrator |
2026-03-19 05:00:00.263185 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-19 05:00:00.263193 | orchestrator | Thursday 19 March 2026 04:59:53 +0000 (0:00:00.587) 0:23:47.161 ********
2026-03-19 05:00:00.263200 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-03-19 05:00:00.263208 | orchestrator |
2026-03-19 05:00:00.263216 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-19 05:00:00.263224 | orchestrator | Thursday 19 March 2026 04:59:54 +0000 (0:00:00.570) 0:23:47.732 ********
2026-03-19 05:00:00.263232 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263240 | orchestrator |
2026-03-19 05:00:00.263248 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-19 05:00:00.263256 | orchestrator | Thursday 19 March 2026 04:59:55 +0000 (0:00:01.036) 0:23:48.768 ********
2026-03-19 05:00:00.263264 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263272 | orchestrator |
2026-03-19 05:00:00.263280 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-19 05:00:00.263288 | orchestrator | Thursday 19 March 2026 04:59:56 +0000 (0:00:01.253) 0:23:50.022 ********
2026-03-19 05:00:00.263296 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263304 | orchestrator |
2026-03-19 05:00:00.263312 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-19 05:00:00.263320 | orchestrator | Thursday 19 March 2026 04:59:58 +0000 (0:00:01.289) 0:23:51.311 ********
2026-03-19 05:00:00.263327 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263335 | orchestrator |
2026-03-19 05:00:00.263343 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-19 05:00:00.263351 | orchestrator | Thursday 19 March 2026 04:59:59 +0000 (0:00:01.288) 0:23:52.600 ********
2026-03-19 05:00:00.263359 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:00.263367 | orchestrator |
2026-03-19 05:00:00.263375 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-03-19 05:00:00.263383 | orchestrator | Thursday 19 March 2026 05:00:00 +0000 (0:00:00.743) 0:23:53.343 ********
2026-03-19 05:00:00.263398 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:00:12.858050 | orchestrator |
2026-03-19 05:00:12.858179 | orchestrator | TASK [Restart active mds] ******************************************************
2026-03-19 05:00:12.858193 | orchestrator | Thursday 19 March 2026 05:00:00 +0000 (0:00:00.167) 0:23:53.510 ********
2026-03-19 05:00:12.858197 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:00:12.858202 | orchestrator |
2026-03-19 05:00:12.858206 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-03-19 05:00:12.858210 | orchestrator |
2026-03-19 05:00:12.858214 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-19 05:00:12.858218 | orchestrator | Thursday 19 March 2026 05:00:04 +0000 (0:00:04.540) 0:23:58.051 ********
2026-03-19 05:00:12.858223 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-3
2026-03-19 05:00:12.858228 | orchestrator |
2026-03-19 05:00:12.858243 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-19 05:00:12.858248 | orchestrator | Thursday 19 March 2026 05:00:05 +0000 (0:00:00.399) 0:23:58.451 ********
2026-03-19 05:00:12.858251 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:00:12.858255 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:00:12.858259 | orchestrator |
2026-03-19 05:00:12.858263 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-19 05:00:12.858280 | orchestrator | Thursday 19 March 2026 05:00:06 +0000 (0:00:00.918) 0:23:59.369 ********
2026-03-19 05:00:12.858284 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:00:12.858288 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:00:12.858291 | orchestrator |
2026-03-19 05:00:12.858295 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-19 05:00:12.858299 | orchestrator | Thursday 19 March 2026 05:00:06 +0000 (0:00:00.584) 0:23:59.624 ********
2026-03-19 05:00:12.858303 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:00:12.858307 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:00:12.858310 | orchestrator |
2026-03-19 05:00:12.858314 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-19 05:00:12.858318 | orchestrator | Thursday 19 March 2026 05:00:06 +0000 (0:00:00.258) 0:24:00.209 ********
2026-03-19 05:00:12.858322 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:00:12.858325 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:00:12.858329 | orchestrator |
2026-03-19 05:00:12.858333 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-19 05:00:12.858337 | orchestrator | Thursday 19 March 2026 05:00:07 +0000 (0:00:00.258) 0:24:00.468 ********
2026-03-19 05:00:12.858340 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:00:12.858344 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:00:12.858348 | orchestrator |
2026-03-19 05:00:12.858352 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-19 05:00:12.858355 | orchestrator | Thursday 19 March
2026 05:00:07 +0000 (0:00:00.258) 0:24:00.726 ******** 2026-03-19 05:00:12.858359 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:12.858363 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:12.858367 | orchestrator | 2026-03-19 05:00:12.858371 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 05:00:12.858375 | orchestrator | Thursday 19 March 2026 05:00:07 +0000 (0:00:00.276) 0:24:01.002 ******** 2026-03-19 05:00:12.858379 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:12.858384 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:12.858387 | orchestrator | 2026-03-19 05:00:12.858391 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 05:00:12.858395 | orchestrator | Thursday 19 March 2026 05:00:08 +0000 (0:00:00.559) 0:24:01.562 ******** 2026-03-19 05:00:12.858399 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:12.858403 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:12.858406 | orchestrator | 2026-03-19 05:00:12.858410 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 05:00:12.858414 | orchestrator | Thursday 19 March 2026 05:00:08 +0000 (0:00:00.218) 0:24:01.781 ******** 2026-03-19 05:00:12.858418 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:00:12.858422 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:00:12.858426 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:00:12.858429 | orchestrator | 2026-03-19 05:00:12.858433 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 05:00:12.858437 | orchestrator | Thursday 19 March 2026 05:00:09 +0000 (0:00:00.690) 0:24:02.471 ******** 2026-03-19 
05:00:12.858440 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:12.858444 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:12.858448 | orchestrator | 2026-03-19 05:00:12.858452 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 05:00:12.858455 | orchestrator | Thursday 19 March 2026 05:00:09 +0000 (0:00:00.362) 0:24:02.834 ******** 2026-03-19 05:00:12.858459 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:00:12.858463 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:00:12.858467 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:00:12.858474 | orchestrator | 2026-03-19 05:00:12.858478 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 05:00:12.858481 | orchestrator | Thursday 19 March 2026 05:00:11 +0000 (0:00:01.768) 0:24:04.603 ******** 2026-03-19 05:00:12.858485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 05:00:12.858490 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 05:00:12.858493 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 05:00:12.858497 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:12.858501 | orchestrator | 2026-03-19 05:00:12.858505 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 05:00:12.858508 | orchestrator | Thursday 19 March 2026 05:00:11 +0000 (0:00:00.497) 0:24:05.100 ******** 2026-03-19 05:00:12.858524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 
05:00:12.858530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 05:00:12.858537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 05:00:12.858541 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:12.858545 | orchestrator | 2026-03-19 05:00:12.858549 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 05:00:12.858553 | orchestrator | Thursday 19 March 2026 05:00:12 +0000 (0:00:00.636) 0:24:05.737 ******** 2026-03-19 05:00:12.858558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:12.858565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:12.858569 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:12.858573 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:12.858577 | orchestrator | 2026-03-19 05:00:12.858580 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 05:00:12.858584 | orchestrator | Thursday 19 March 2026 05:00:12 +0000 (0:00:00.173) 0:24:05.910 ******** 2026-03-19 05:00:12.858590 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 05:00:10.079014', 'end': '2026-03-19 05:00:10.122803', 'delta': '0:00:00.043789', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 05:00:12.858599 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 05:00:10.629540', 'end': '2026-03-19 05:00:10.670245', 'delta': '0:00:00.040705', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 05:00:12.858607 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 05:00:11.165923', 'end': '2026-03-19 05:00:11.200926', 'delta': '0:00:00.035003', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 05:00:18.255880 | orchestrator | 2026-03-19 05:00:18.255987 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 05:00:18.256018 | orchestrator | Thursday 19 March 2026 05:00:12 +0000 (0:00:00.198) 0:24:06.109 ******** 2026-03-19 05:00:18.256029 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256040 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256050 | orchestrator | 2026-03-19 05:00:18.256060 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 05:00:18.256070 | orchestrator | Thursday 19 March 2026 05:00:13 +0000 (0:00:00.411) 0:24:06.521 ******** 2026-03-19 05:00:18.256080 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256091 | orchestrator | 2026-03-19 05:00:18.256101 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 05:00:18.256111 | orchestrator | Thursday 
19 March 2026 05:00:13 +0000 (0:00:00.241) 0:24:06.763 ******** 2026-03-19 05:00:18.256121 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256130 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256140 | orchestrator | 2026-03-19 05:00:18.256150 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 05:00:18.256160 | orchestrator | Thursday 19 March 2026 05:00:14 +0000 (0:00:00.578) 0:24:07.341 ******** 2026-03-19 05:00:18.256169 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:00:18.256180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:00:18.256190 | orchestrator | 2026-03-19 05:00:18.256199 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:00:18.256209 | orchestrator | Thursday 19 March 2026 05:00:15 +0000 (0:00:01.323) 0:24:08.665 ******** 2026-03-19 05:00:18.256219 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256228 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256238 | orchestrator | 2026-03-19 05:00:18.256248 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 05:00:18.256258 | orchestrator | Thursday 19 March 2026 05:00:15 +0000 (0:00:00.250) 0:24:08.916 ******** 2026-03-19 05:00:18.256267 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256296 | orchestrator | 2026-03-19 05:00:18.256307 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 05:00:18.256317 | orchestrator | Thursday 19 March 2026 05:00:15 +0000 (0:00:00.159) 0:24:09.075 ******** 2026-03-19 05:00:18.256326 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256336 | orchestrator | 2026-03-19 05:00:18.256346 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 
05:00:18.256355 | orchestrator | Thursday 19 March 2026 05:00:16 +0000 (0:00:00.226) 0:24:09.302 ******** 2026-03-19 05:00:18.256367 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256378 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:18.256390 | orchestrator | 2026-03-19 05:00:18.256401 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 05:00:18.256412 | orchestrator | Thursday 19 March 2026 05:00:16 +0000 (0:00:00.229) 0:24:09.532 ******** 2026-03-19 05:00:18.256423 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256435 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:18.256462 | orchestrator | 2026-03-19 05:00:18.256484 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 05:00:18.256495 | orchestrator | Thursday 19 March 2026 05:00:16 +0000 (0:00:00.201) 0:24:09.733 ******** 2026-03-19 05:00:18.256506 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256517 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256529 | orchestrator | 2026-03-19 05:00:18.256540 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 05:00:18.256552 | orchestrator | Thursday 19 March 2026 05:00:17 +0000 (0:00:00.571) 0:24:10.305 ******** 2026-03-19 05:00:18.256563 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256575 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:18.256585 | orchestrator | 2026-03-19 05:00:18.256597 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 05:00:18.256608 | orchestrator | Thursday 19 March 2026 05:00:17 +0000 (0:00:00.238) 0:24:10.543 ******** 2026-03-19 05:00:18.256619 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256631 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256642 | orchestrator | 2026-03-19 
05:00:18.256653 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 05:00:18.256665 | orchestrator | Thursday 19 March 2026 05:00:17 +0000 (0:00:00.245) 0:24:10.789 ******** 2026-03-19 05:00:18.256676 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.256688 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:18.256699 | orchestrator | 2026-03-19 05:00:18.256782 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 05:00:18.256799 | orchestrator | Thursday 19 March 2026 05:00:17 +0000 (0:00:00.237) 0:24:11.026 ******** 2026-03-19 05:00:18.256808 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:18.256818 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:18.256828 | orchestrator | 2026-03-19 05:00:18.256838 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 05:00:18.256848 | orchestrator | Thursday 19 March 2026 05:00:18 +0000 (0:00:00.256) 0:24:11.282 ******** 2026-03-19 05:00:18.256860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.256899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}})  2026-03-19 05:00:18.256924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 05:00:18.256936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}})  2026-03-19 05:00:18.256947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.256959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.256969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 05:00:18.256981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.257003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}})  2026-03-19 05:00:18.366680 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}})  2026-03-19 05:00:18.366691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 05:00:18.366858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}})  2026-03-19 05:00:18.366888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 05:00:18.366909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:00:18.366939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}})  2026-03-19 05:00:18.507434 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:18.507546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 05:00:18.507593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': 
{}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}})  2026-03-19 05:00:18.507705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}})  2026-03-19 05:00:18.507756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 05:00:18.507799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:00:18.507837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:00:18.742676 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:18.742827 | orchestrator | 2026-03-19 05:00:18.742843 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 05:00:18.742854 | orchestrator | Thursday 19 March 2026 05:00:18 +0000 (0:00:00.478) 0:24:11.760 ******** 2026-03-19 05:00:18.742867 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.742882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.742893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.742950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.742984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.742997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.743008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.743018 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.743028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.743051 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.743085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.817709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.817883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.817905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.817961 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.817990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.818077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.818103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.818122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:18.818142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008807 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:19.008836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 
'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008933 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:19.008957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:28.156700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:28.156835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:28.156861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:00:28.156870 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.156879 | orchestrator | 2026-03-19 05:00:28.156886 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 05:00:28.156894 | orchestrator | Thursday 19 March 2026 05:00:19 +0000 (0:00:00.504) 0:24:12.265 ******** 2026-03-19 05:00:28.156900 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:28.156907 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:28.156913 | orchestrator | 2026-03-19 05:00:28.156920 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 05:00:28.156926 | orchestrator | Thursday 19 March 2026 05:00:19 +0000 (0:00:00.936) 0:24:13.201 ******** 2026-03-19 05:00:28.156932 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:28.156938 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:28.156945 | orchestrator | 2026-03-19 05:00:28.156951 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:00:28.156957 | orchestrator | Thursday 19 March 2026 05:00:20 +0000 (0:00:00.213) 0:24:13.415 ******** 2026-03-19 05:00:28.156963 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:28.156969 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:28.156975 | orchestrator | 2026-03-19 05:00:28.156982 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:00:28.156988 | orchestrator | Thursday 19 March 2026 05:00:20 +0000 (0:00:00.602) 0:24:14.017 ******** 2026-03-19 05:00:28.156994 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157001 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157048 | orchestrator | 2026-03-19 05:00:28.157055 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-03-19 05:00:28.157061 | orchestrator | Thursday 19 March 2026 05:00:20 +0000 (0:00:00.231) 0:24:14.248 ******** 2026-03-19 05:00:28.157067 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157074 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157080 | orchestrator | 2026-03-19 05:00:28.157086 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:00:28.157092 | orchestrator | Thursday 19 March 2026 05:00:21 +0000 (0:00:00.341) 0:24:14.589 ******** 2026-03-19 05:00:28.157098 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157104 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157111 | orchestrator | 2026-03-19 05:00:28.157117 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 05:00:28.157123 | orchestrator | Thursday 19 March 2026 05:00:21 +0000 (0:00:00.247) 0:24:14.837 ******** 2026-03-19 05:00:28.157129 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 05:00:28.157136 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 05:00:28.157142 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 05:00:28.157148 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 05:00:28.157154 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 05:00:28.157160 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 05:00:28.157166 | orchestrator | 2026-03-19 05:00:28.157173 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 05:00:28.157179 | orchestrator | Thursday 19 March 2026 05:00:23 +0000 (0:00:01.443) 0:24:16.280 ******** 2026-03-19 05:00:28.157197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 05:00:28.157204 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 05:00:28.157211 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 05:00:28.157217 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 05:00:28.157229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 05:00:28.157235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 05:00:28.157241 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157249 | orchestrator | 2026-03-19 05:00:28.157256 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 05:00:28.157264 | orchestrator | Thursday 19 March 2026 05:00:23 +0000 (0:00:00.263) 0:24:16.544 ******** 2026-03-19 05:00:28.157272 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:28.157280 | orchestrator | 2026-03-19 05:00:28.157288 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:00:28.157297 | orchestrator | Thursday 19 March 2026 05:00:23 +0000 (0:00:00.410) 0:24:16.955 ******** 2026-03-19 05:00:28.157304 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157311 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157318 | orchestrator | 2026-03-19 05:00:28.157325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:00:28.157333 | orchestrator | Thursday 19 March 2026 05:00:23 +0000 (0:00:00.265) 0:24:17.220 ******** 2026-03-19 05:00:28.157340 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157348 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157355 | orchestrator | 2026-03-19 05:00:28.157363 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:00:28.157370 | orchestrator | Thursday 19 March 2026 05:00:24 +0000 (0:00:00.254) 0:24:17.474 ******** 2026-03-19 05:00:28.157377 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157384 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:28.157390 | orchestrator | 2026-03-19 05:00:28.157405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:00:28.157412 | orchestrator | Thursday 19 March 2026 05:00:24 +0000 (0:00:00.243) 0:24:17.718 ******** 2026-03-19 05:00:28.157418 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:28.157424 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:28.157430 | orchestrator | 2026-03-19 05:00:28.157437 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:00:28.157443 | orchestrator | Thursday 19 March 2026 05:00:25 +0000 (0:00:00.654) 0:24:18.373 ******** 2026-03-19 05:00:28.157449 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:00:28.157455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:00:28.157461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:00:28.157467 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157474 | orchestrator | 2026-03-19 05:00:28.157480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:00:28.157486 | orchestrator | Thursday 19 March 2026 05:00:25 +0000 (0:00:00.413) 0:24:18.786 ******** 2026-03-19 05:00:28.157492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:00:28.157498 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:00:28.157504 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  
2026-03-19 05:00:28.157511 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157517 | orchestrator | 2026-03-19 05:00:28.157523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:00:28.157529 | orchestrator | Thursday 19 March 2026 05:00:25 +0000 (0:00:00.406) 0:24:19.193 ******** 2026-03-19 05:00:28.157535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:00:28.157542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:00:28.157548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:00:28.157554 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:28.157560 | orchestrator | 2026-03-19 05:00:28.157566 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 05:00:28.157573 | orchestrator | Thursday 19 March 2026 05:00:26 +0000 (0:00:00.384) 0:24:19.578 ******** 2026-03-19 05:00:28.157579 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:28.157585 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:28.157591 | orchestrator | 2026-03-19 05:00:28.157597 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:00:28.157604 | orchestrator | Thursday 19 March 2026 05:00:26 +0000 (0:00:00.247) 0:24:19.825 ******** 2026-03-19 05:00:28.157610 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 05:00:28.157616 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 05:00:28.157622 | orchestrator | 2026-03-19 05:00:28.157628 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 05:00:28.157635 | orchestrator | Thursday 19 March 2026 05:00:26 +0000 (0:00:00.433) 0:24:20.259 ******** 2026-03-19 05:00:28.157641 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 
05:00:28.157647 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:00:28.157653 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:00:28.157660 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 05:00:28.157666 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 05:00:28.157672 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 05:00:28.157682 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 05:00:41.404309 | orchestrator | 2026-03-19 05:00:41.404388 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 05:00:41.404414 | orchestrator | Thursday 19 March 2026 05:00:28 +0000 (0:00:01.149) 0:24:21.409 ******** 2026-03-19 05:00:41.404420 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:00:41.404426 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:00:41.404431 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:00:41.404435 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 05:00:41.404441 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 05:00:41.404446 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 05:00:41.404451 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 05:00:41.404455 | orchestrator | 2026-03-19 05:00:41.404460 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-03-19 
05:00:41.404465 | orchestrator | Thursday 19 March 2026 05:00:29 +0000 (0:00:01.699) 0:24:23.108 ******** 2026-03-19 05:00:41.404470 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404475 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404480 | orchestrator | 2026-03-19 05:00:41.404484 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 05:00:41.404489 | orchestrator | Thursday 19 March 2026 05:00:30 +0000 (0:00:00.536) 0:24:23.645 ******** 2026-03-19 05:00:41.404494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:41.404499 | orchestrator | 2026-03-19 05:00:41.404505 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 05:00:41.404514 | orchestrator | Thursday 19 March 2026 05:00:30 +0000 (0:00:00.364) 0:24:24.009 ******** 2026-03-19 05:00:41.404535 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:41.404543 | orchestrator | 2026-03-19 05:00:41.404551 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 05:00:41.404558 | orchestrator | Thursday 19 March 2026 05:00:31 +0000 (0:00:00.372) 0:24:24.382 ******** 2026-03-19 05:00:41.404566 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404574 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404582 | orchestrator | 2026-03-19 05:00:41.404590 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 05:00:41.404597 | orchestrator | Thursday 19 March 2026 05:00:31 +0000 (0:00:00.212) 0:24:24.594 ******** 2026-03-19 05:00:41.404605 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.404612 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.404617 | 
orchestrator | 2026-03-19 05:00:41.404621 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-19 05:00:41.404626 | orchestrator | Thursday 19 March 2026 05:00:31 +0000 (0:00:00.614) 0:24:25.209 ******** 2026-03-19 05:00:41.404630 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.404635 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.404639 | orchestrator | 2026-03-19 05:00:41.404644 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 05:00:41.404648 | orchestrator | Thursday 19 March 2026 05:00:32 +0000 (0:00:00.967) 0:24:26.177 ******** 2026-03-19 05:00:41.404653 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.404657 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.404662 | orchestrator | 2026-03-19 05:00:41.404666 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 05:00:41.404671 | orchestrator | Thursday 19 March 2026 05:00:33 +0000 (0:00:00.643) 0:24:26.820 ******** 2026-03-19 05:00:41.404675 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404680 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404684 | orchestrator | 2026-03-19 05:00:41.404689 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 05:00:41.404699 | orchestrator | Thursday 19 March 2026 05:00:33 +0000 (0:00:00.239) 0:24:27.060 ******** 2026-03-19 05:00:41.404703 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404708 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404712 | orchestrator | 2026-03-19 05:00:41.404787 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 05:00:41.404796 | orchestrator | Thursday 19 March 2026 05:00:34 +0000 (0:00:00.232) 0:24:27.293 ******** 2026-03-19 05:00:41.404803 | orchestrator | skipping: 
[testbed-node-4] 2026-03-19 05:00:41.404810 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404817 | orchestrator | 2026-03-19 05:00:41.404861 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 05:00:41.404869 | orchestrator | Thursday 19 March 2026 05:00:34 +0000 (0:00:00.222) 0:24:27.515 ******** 2026-03-19 05:00:41.404877 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.404886 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.404893 | orchestrator | 2026-03-19 05:00:41.404902 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 05:00:41.404908 | orchestrator | Thursday 19 March 2026 05:00:34 +0000 (0:00:00.694) 0:24:28.209 ******** 2026-03-19 05:00:41.404913 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.404919 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.404924 | orchestrator | 2026-03-19 05:00:41.404929 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 05:00:41.404935 | orchestrator | Thursday 19 March 2026 05:00:35 +0000 (0:00:00.946) 0:24:29.156 ******** 2026-03-19 05:00:41.404940 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404946 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404951 | orchestrator | 2026-03-19 05:00:41.404956 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 05:00:41.404962 | orchestrator | Thursday 19 March 2026 05:00:36 +0000 (0:00:00.238) 0:24:29.394 ******** 2026-03-19 05:00:41.404967 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.404987 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.404992 | orchestrator | 2026-03-19 05:00:41.404997 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 05:00:41.405003 | orchestrator | Thursday 
19 March 2026 05:00:36 +0000 (0:00:00.228) 0:24:29.622 ******** 2026-03-19 05:00:41.405008 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.405014 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.405019 | orchestrator | 2026-03-19 05:00:41.405024 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 05:00:41.405029 | orchestrator | Thursday 19 March 2026 05:00:36 +0000 (0:00:00.245) 0:24:29.867 ******** 2026-03-19 05:00:41.405034 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.405040 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.405045 | orchestrator | 2026-03-19 05:00:41.405050 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 05:00:41.405055 | orchestrator | Thursday 19 March 2026 05:00:36 +0000 (0:00:00.261) 0:24:30.129 ******** 2026-03-19 05:00:41.405060 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.405066 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.405071 | orchestrator | 2026-03-19 05:00:41.405076 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 05:00:41.405082 | orchestrator | Thursday 19 March 2026 05:00:37 +0000 (0:00:00.262) 0:24:30.391 ******** 2026-03-19 05:00:41.405087 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405092 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405097 | orchestrator | 2026-03-19 05:00:41.405103 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 05:00:41.405108 | orchestrator | Thursday 19 March 2026 05:00:37 +0000 (0:00:00.228) 0:24:30.619 ******** 2026-03-19 05:00:41.405113 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405119 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405124 | orchestrator | 2026-03-19 05:00:41.405129 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-03-19 05:00:41.405141 | orchestrator | Thursday 19 March 2026 05:00:37 +0000 (0:00:00.518) 0:24:31.138 ******** 2026-03-19 05:00:41.405146 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405151 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405157 | orchestrator | 2026-03-19 05:00:41.405162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 05:00:41.405172 | orchestrator | Thursday 19 March 2026 05:00:38 +0000 (0:00:00.232) 0:24:31.370 ******** 2026-03-19 05:00:41.405178 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.405183 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.405188 | orchestrator | 2026-03-19 05:00:41.405193 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 05:00:41.405198 | orchestrator | Thursday 19 March 2026 05:00:38 +0000 (0:00:00.252) 0:24:31.623 ******** 2026-03-19 05:00:41.405204 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:41.405209 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:41.405214 | orchestrator | 2026-03-19 05:00:41.405220 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-19 05:00:41.405225 | orchestrator | Thursday 19 March 2026 05:00:38 +0000 (0:00:00.369) 0:24:31.992 ******** 2026-03-19 05:00:41.405231 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405236 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405241 | orchestrator | 2026-03-19 05:00:41.405246 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-19 05:00:41.405250 | orchestrator | Thursday 19 March 2026 05:00:38 +0000 (0:00:00.216) 0:24:32.209 ******** 2026-03-19 05:00:41.405255 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405259 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 05:00:41.405264 | orchestrator | 2026-03-19 05:00:41.405268 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-19 05:00:41.405273 | orchestrator | Thursday 19 March 2026 05:00:39 +0000 (0:00:00.203) 0:24:32.413 ******** 2026-03-19 05:00:41.405277 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405282 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405286 | orchestrator | 2026-03-19 05:00:41.405291 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-19 05:00:41.405295 | orchestrator | Thursday 19 March 2026 05:00:39 +0000 (0:00:00.526) 0:24:32.939 ******** 2026-03-19 05:00:41.405300 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405304 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405309 | orchestrator | 2026-03-19 05:00:41.405313 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-19 05:00:41.405318 | orchestrator | Thursday 19 March 2026 05:00:39 +0000 (0:00:00.264) 0:24:33.203 ******** 2026-03-19 05:00:41.405322 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405327 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405331 | orchestrator | 2026-03-19 05:00:41.405336 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-19 05:00:41.405340 | orchestrator | Thursday 19 March 2026 05:00:40 +0000 (0:00:00.244) 0:24:33.448 ******** 2026-03-19 05:00:41.405345 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405350 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405354 | orchestrator | 2026-03-19 05:00:41.405358 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 05:00:41.405363 | orchestrator | Thursday 19 March 2026 05:00:40 +0000 (0:00:00.260) 0:24:33.709 ******** 
2026-03-19 05:00:41.405368 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405372 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405377 | orchestrator | 2026-03-19 05:00:41.405381 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 05:00:41.405386 | orchestrator | Thursday 19 March 2026 05:00:40 +0000 (0:00:00.235) 0:24:33.945 ******** 2026-03-19 05:00:41.405390 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405398 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405403 | orchestrator | 2026-03-19 05:00:41.405407 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 05:00:41.405412 | orchestrator | Thursday 19 March 2026 05:00:40 +0000 (0:00:00.203) 0:24:34.148 ******** 2026-03-19 05:00:41.405416 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:41.405421 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:41.405425 | orchestrator | 2026-03-19 05:00:41.405432 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 05:00:55.973000 | orchestrator | Thursday 19 March 2026 05:00:41 +0000 (0:00:00.504) 0:24:34.653 ******** 2026-03-19 05:00:55.973093 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973103 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973110 | orchestrator | 2026-03-19 05:00:55.973117 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 05:00:55.973124 | orchestrator | Thursday 19 March 2026 05:00:41 +0000 (0:00:00.250) 0:24:34.903 ******** 2026-03-19 05:00:55.973131 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973137 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973143 | orchestrator | 2026-03-19 05:00:55.973150 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-19 05:00:55.973156 | orchestrator | Thursday 19 March 2026 05:00:41 +0000 (0:00:00.251) 0:24:35.155 ******** 2026-03-19 05:00:55.973162 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973169 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973175 | orchestrator | 2026-03-19 05:00:55.973181 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 05:00:55.973188 | orchestrator | Thursday 19 March 2026 05:00:42 +0000 (0:00:00.376) 0:24:35.532 ******** 2026-03-19 05:00:55.973194 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973201 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973207 | orchestrator | 2026-03-19 05:00:55.973213 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 05:00:55.973220 | orchestrator | Thursday 19 March 2026 05:00:43 +0000 (0:00:01.048) 0:24:36.581 ******** 2026-03-19 05:00:55.973226 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973232 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973238 | orchestrator | 2026-03-19 05:00:55.973245 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 05:00:55.973251 | orchestrator | Thursday 19 March 2026 05:00:44 +0000 (0:00:01.393) 0:24:37.974 ******** 2026-03-19 05:00:55.973257 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:55.973263 | orchestrator | 2026-03-19 05:00:55.973270 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 05:00:55.973291 | orchestrator | Thursday 19 March 2026 05:00:45 +0000 (0:00:00.724) 0:24:38.698 ******** 2026-03-19 05:00:55.973297 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973303 | orchestrator | skipping: [testbed-node-3] 
2026-03-19 05:00:55.973309 | orchestrator | 2026-03-19 05:00:55.973316 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 05:00:55.973322 | orchestrator | Thursday 19 March 2026 05:00:45 +0000 (0:00:00.232) 0:24:38.931 ******** 2026-03-19 05:00:55.973328 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973334 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973341 | orchestrator | 2026-03-19 05:00:55.973347 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-19 05:00:55.973353 | orchestrator | Thursday 19 March 2026 05:00:45 +0000 (0:00:00.237) 0:24:39.169 ******** 2026-03-19 05:00:55.973359 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 05:00:55.973365 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 05:00:55.973372 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 05:00:55.973398 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 05:00:55.973404 | orchestrator | 2026-03-19 05:00:55.973410 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 05:00:55.973416 | orchestrator | Thursday 19 March 2026 05:00:46 +0000 (0:00:00.971) 0:24:40.140 ******** 2026-03-19 05:00:55.973422 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973429 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973436 | orchestrator | 2026-03-19 05:00:55.973442 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 05:00:55.973448 | orchestrator | Thursday 19 March 2026 05:00:47 +0000 (0:00:00.572) 0:24:40.713 ******** 2026-03-19 05:00:55.973454 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973460 | 
orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973467 | orchestrator | 2026-03-19 05:00:55.973473 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 05:00:55.973480 | orchestrator | Thursday 19 March 2026 05:00:47 +0000 (0:00:00.223) 0:24:40.936 ******** 2026-03-19 05:00:55.973486 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973492 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973498 | orchestrator | 2026-03-19 05:00:55.973505 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 05:00:55.973511 | orchestrator | Thursday 19 March 2026 05:00:48 +0000 (0:00:00.573) 0:24:41.510 ******** 2026-03-19 05:00:55.973517 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973523 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973530 | orchestrator | 2026-03-19 05:00:55.973536 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 05:00:55.973542 | orchestrator | Thursday 19 March 2026 05:00:48 +0000 (0:00:00.248) 0:24:41.758 ******** 2026-03-19 05:00:55.973548 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:55.973555 | orchestrator | 2026-03-19 05:00:55.973561 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 05:00:55.973567 | orchestrator | Thursday 19 March 2026 05:00:48 +0000 (0:00:00.390) 0:24:42.149 ******** 2026-03-19 05:00:55.973574 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973580 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973586 | orchestrator | 2026-03-19 05:00:55.973593 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 05:00:55.973599 | orchestrator | Thursday 19 March 2026 
05:00:49 +0000 (0:00:00.810) 0:24:42.960 ******** 2026-03-19 05:00:55.973606 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 05:00:55.973625 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 05:00:55.973632 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 05:00:55.973638 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973645 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 05:00:55.973651 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 05:00:55.973657 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 05:00:55.973664 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973670 | orchestrator | 2026-03-19 05:00:55.973676 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 05:00:55.973683 | orchestrator | Thursday 19 March 2026 05:00:49 +0000 (0:00:00.266) 0:24:43.226 ******** 2026-03-19 05:00:55.973690 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973696 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973703 | orchestrator | 2026-03-19 05:00:55.973709 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-19 05:00:55.973715 | orchestrator | Thursday 19 March 2026 05:00:50 +0000 (0:00:00.508) 0:24:43.735 ******** 2026-03-19 05:00:55.973744 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973752 | orchestrator | 2026-03-19 05:00:55.973759 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-19 05:00:55.973764 | orchestrator | Thursday 19 March 2026 05:00:50 +0000 (0:00:00.165) 0:24:43.900 ******** 2026-03-19 05:00:55.973771 | orchestrator 
| skipping: [testbed-node-4] 2026-03-19 05:00:55.973777 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973784 | orchestrator | 2026-03-19 05:00:55.973791 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 05:00:55.973797 | orchestrator | Thursday 19 March 2026 05:00:50 +0000 (0:00:00.254) 0:24:44.155 ******** 2026-03-19 05:00:55.973804 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973809 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973813 | orchestrator | 2026-03-19 05:00:55.973818 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 05:00:55.973827 | orchestrator | Thursday 19 March 2026 05:00:51 +0000 (0:00:00.260) 0:24:44.416 ******** 2026-03-19 05:00:55.973833 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973840 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973847 | orchestrator | 2026-03-19 05:00:55.973852 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 05:00:55.973856 | orchestrator | Thursday 19 March 2026 05:00:51 +0000 (0:00:00.253) 0:24:44.669 ******** 2026-03-19 05:00:55.973861 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973866 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973870 | orchestrator | 2026-03-19 05:00:55.973874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 05:00:55.973879 | orchestrator | Thursday 19 March 2026 05:00:52 +0000 (0:00:01.558) 0:24:46.227 ******** 2026-03-19 05:00:55.973883 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:00:55.973887 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:00:55.973892 | orchestrator | 2026-03-19 05:00:55.973896 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 05:00:55.973901 | 
orchestrator | Thursday 19 March 2026 05:00:53 +0000 (0:00:00.231) 0:24:46.459 ******** 2026-03-19 05:00:55.973906 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3 2026-03-19 05:00:55.973911 | orchestrator | 2026-03-19 05:00:55.973916 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-19 05:00:55.973920 | orchestrator | Thursday 19 March 2026 05:00:53 +0000 (0:00:00.707) 0:24:47.166 ******** 2026-03-19 05:00:55.973924 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973928 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973932 | orchestrator | 2026-03-19 05:00:55.973936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 05:00:55.973939 | orchestrator | Thursday 19 March 2026 05:00:54 +0000 (0:00:00.246) 0:24:47.413 ******** 2026-03-19 05:00:55.973943 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973947 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973950 | orchestrator | 2026-03-19 05:00:55.973954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 05:00:55.973958 | orchestrator | Thursday 19 March 2026 05:00:54 +0000 (0:00:00.272) 0:24:47.685 ******** 2026-03-19 05:00:55.973961 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973965 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973969 | orchestrator | 2026-03-19 05:00:55.973973 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 05:00:55.973976 | orchestrator | Thursday 19 March 2026 05:00:54 +0000 (0:00:00.253) 0:24:47.939 ******** 2026-03-19 05:00:55.973980 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.973984 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.973987 | orchestrator | 2026-03-19 
05:00:55.973991 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-19 05:00:55.973995 | orchestrator | Thursday 19 March 2026 05:00:54 +0000 (0:00:00.251) 0:24:48.191 ******** 2026-03-19 05:00:55.974002 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.974006 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.974010 | orchestrator | 2026-03-19 05:00:55.974049 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-19 05:00:55.974053 | orchestrator | Thursday 19 March 2026 05:00:55 +0000 (0:00:00.231) 0:24:48.422 ******** 2026-03-19 05:00:55.974057 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.974061 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.974064 | orchestrator | 2026-03-19 05:00:55.974068 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 05:00:55.974072 | orchestrator | Thursday 19 March 2026 05:00:55 +0000 (0:00:00.556) 0:24:48.978 ******** 2026-03-19 05:00:55.974076 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:00:55.974080 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:00:55.974083 | orchestrator | 2026-03-19 05:00:55.974091 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 05:01:16.636550 | orchestrator | Thursday 19 March 2026 05:00:55 +0000 (0:00:00.247) 0:24:49.226 ******** 2026-03-19 05:01:16.636664 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.636679 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.636692 | orchestrator | 2026-03-19 05:01:16.636705 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 05:01:16.636717 | orchestrator | Thursday 19 March 2026 05:00:56 +0000 (0:00:00.239) 0:24:49.466 ******** 2026-03-19 05:01:16.636776 | orchestrator | ok: 
[testbed-node-4] 2026-03-19 05:01:16.636790 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:16.636802 | orchestrator | 2026-03-19 05:01:16.636814 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 05:01:16.636826 | orchestrator | Thursday 19 March 2026 05:00:56 +0000 (0:00:00.376) 0:24:49.843 ******** 2026-03-19 05:01:16.636839 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3 2026-03-19 05:01:16.636851 | orchestrator | 2026-03-19 05:01:16.636863 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 05:01:16.636876 | orchestrator | Thursday 19 March 2026 05:00:56 +0000 (0:00:00.357) 0:24:50.200 ******** 2026-03-19 05:01:16.636888 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-19 05:01:16.636899 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-19 05:01:16.636911 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-19 05:01:16.636923 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-19 05:01:16.636934 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-19 05:01:16.636946 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-19 05:01:16.636957 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-19 05:01:16.636969 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-19 05:01:16.636980 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-19 05:01:16.636992 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-19 05:01:16.637003 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-19 05:01:16.637031 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-19 05:01:16.637043 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 
2026-03-19 05:01:16.637055 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-19 05:01:16.637066 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-19 05:01:16.637078 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 05:01:16.637091 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-19 05:01:16.637104 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 05:01:16.637117 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 05:01:16.637130 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 05:01:16.637167 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 05:01:16.637181 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 05:01:16.637194 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 05:01:16.637205 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 05:01:16.637215 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 05:01:16.637226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 05:01:16.637237 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 05:01:16.637247 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-19 05:01:16.637259 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 05:01:16.637270 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-19 05:01:16.637283 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-19 05:01:16.637295 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-19 05:01:16.637308 | orchestrator | 2026-03-19 05:01:16.637321 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 05:01:16.637333 | orchestrator | Thursday 19 March 2026 05:01:03 +0000 (0:00:06.240) 0:24:56.440 ******** 2026-03-19 05:01:16.637346 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3 2026-03-19 05:01:16.637359 | orchestrator | 2026-03-19 05:01:16.637372 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-19 05:01:16.637385 | orchestrator | Thursday 19 March 2026 05:01:03 +0000 (0:00:00.360) 0:24:56.801 ******** 2026-03-19 05:01:16.637398 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.637413 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.637426 | orchestrator | 2026-03-19 05:01:16.637438 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-19 05:01:16.637451 | orchestrator | Thursday 19 March 2026 05:01:04 +0000 (0:00:00.634) 0:24:57.436 ******** 2026-03-19 05:01:16.637462 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.637473 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.637485 | orchestrator | 2026-03-19 05:01:16.637497 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 05:01:16.637525 | orchestrator | Thursday 19 March 2026 05:01:05 +0000 (0:00:01.085) 0:24:58.521 ******** 2026-03-19 05:01:16.637537 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637550 | orchestrator | 
skipping: [testbed-node-3] 2026-03-19 05:01:16.637562 | orchestrator | 2026-03-19 05:01:16.637573 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 05:01:16.637585 | orchestrator | Thursday 19 March 2026 05:01:05 +0000 (0:00:00.233) 0:24:58.754 ******** 2026-03-19 05:01:16.637596 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637608 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637620 | orchestrator | 2026-03-19 05:01:16.637631 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 05:01:16.637643 | orchestrator | Thursday 19 March 2026 05:01:06 +0000 (0:00:00.543) 0:24:59.297 ******** 2026-03-19 05:01:16.637654 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637666 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637678 | orchestrator | 2026-03-19 05:01:16.637689 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 05:01:16.637701 | orchestrator | Thursday 19 March 2026 05:01:06 +0000 (0:00:00.247) 0:24:59.545 ******** 2026-03-19 05:01:16.637770 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637785 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637798 | orchestrator | 2026-03-19 05:01:16.637809 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 05:01:16.637821 | orchestrator | Thursday 19 March 2026 05:01:06 +0000 (0:00:00.224) 0:24:59.770 ******** 2026-03-19 05:01:16.637833 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637845 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637856 | orchestrator | 2026-03-19 05:01:16.637868 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 05:01:16.637880 | orchestrator | Thursday 19 March 2026 
05:01:06 +0000 (0:00:00.247) 0:25:00.017 ******** 2026-03-19 05:01:16.637891 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637903 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637915 | orchestrator | 2026-03-19 05:01:16.637932 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 05:01:16.637944 | orchestrator | Thursday 19 March 2026 05:01:06 +0000 (0:00:00.243) 0:25:00.260 ******** 2026-03-19 05:01:16.637956 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.637968 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.637979 | orchestrator | 2026-03-19 05:01:16.637991 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-19 05:01:16.638003 | orchestrator | Thursday 19 March 2026 05:01:07 +0000 (0:00:00.293) 0:25:00.554 ******** 2026-03-19 05:01:16.638071 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.638086 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.638098 | orchestrator | 2026-03-19 05:01:16.638109 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 05:01:16.638121 | orchestrator | Thursday 19 March 2026 05:01:07 +0000 (0:00:00.242) 0:25:00.796 ******** 2026-03-19 05:01:16.638133 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.638145 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.638157 | orchestrator | 2026-03-19 05:01:16.638169 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 05:01:16.638180 | orchestrator | Thursday 19 March 2026 05:01:08 +0000 (0:00:00.514) 0:25:01.311 ******** 2026-03-19 05:01:16.638192 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.638204 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
05:01:16.638215 | orchestrator | 2026-03-19 05:01:16.638227 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 05:01:16.638238 | orchestrator | Thursday 19 March 2026 05:01:08 +0000 (0:00:00.264) 0:25:01.575 ******** 2026-03-19 05:01:16.638250 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:16.638262 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:16.638273 | orchestrator | 2026-03-19 05:01:16.638285 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 05:01:16.638297 | orchestrator | Thursday 19 March 2026 05:01:08 +0000 (0:00:00.238) 0:25:01.813 ******** 2026-03-19 05:01:16.638309 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-19 05:01:16.638320 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-19 05:01:16.638332 | orchestrator | 2026-03-19 05:01:16.638344 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 05:01:16.638355 | orchestrator | Thursday 19 March 2026 05:01:12 +0000 (0:00:03.758) 0:25:05.572 ******** 2026-03-19 05:01:16.638367 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.638379 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:01:16.638391 | orchestrator | 2026-03-19 05:01:16.638416 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 05:01:16.638447 | orchestrator | Thursday 19 March 2026 05:01:12 +0000 (0:00:00.302) 0:25:05.875 ******** 2026-03-19 05:01:16.638461 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-19 05:01:16.638484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-19 05:01:40.193816 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-19 05:01:40.193896 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-19 05:01:40.193902 | orchestrator | 2026-03-19 05:01:40.193907 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 05:01:40.193912 | orchestrator | Thursday 19 March 2026 05:01:16 +0000 (0:00:04.011) 0:25:09.887 ******** 2026-03-19 05:01:40.193916 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.193921 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.193925 | orchestrator | 2026-03-19 05:01:40.193929 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 05:01:40.193933 | orchestrator | Thursday 19 March 2026 05:01:17 +0000 
(0:00:00.538) 0:25:10.426 ******** 2026-03-19 05:01:40.193937 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.193941 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.193944 | orchestrator | 2026-03-19 05:01:40.193949 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:01:40.193965 | orchestrator | Thursday 19 March 2026 05:01:17 +0000 (0:00:00.223) 0:25:10.649 ******** 2026-03-19 05:01:40.193969 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.193973 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.193977 | orchestrator | 2026-03-19 05:01:40.193981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:01:40.193985 | orchestrator | Thursday 19 March 2026 05:01:17 +0000 (0:00:00.286) 0:25:10.935 ******** 2026-03-19 05:01:40.193989 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.193993 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.193997 | orchestrator | 2026-03-19 05:01:40.194000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:01:40.194004 | orchestrator | Thursday 19 March 2026 05:01:17 +0000 (0:00:00.265) 0:25:11.201 ******** 2026-03-19 05:01:40.194008 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194048 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.194053 | orchestrator | 2026-03-19 05:01:40.194056 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:01:40.194060 | orchestrator | Thursday 19 March 2026 05:01:18 +0000 (0:00:00.242) 0:25:11.444 ******** 2026-03-19 05:01:40.194064 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194069 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194073 | orchestrator | 2026-03-19 
05:01:40.194077 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:01:40.194095 | orchestrator | Thursday 19 March 2026 05:01:18 +0000 (0:00:00.344) 0:25:11.788 ******** 2026-03-19 05:01:40.194099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:01:40.194103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:01:40.194107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:01:40.194111 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194115 | orchestrator | 2026-03-19 05:01:40.194123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:01:40.194127 | orchestrator | Thursday 19 March 2026 05:01:18 +0000 (0:00:00.394) 0:25:12.183 ******** 2026-03-19 05:01:40.194131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:01:40.194135 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:01:40.194139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:01:40.194146 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194151 | orchestrator | 2026-03-19 05:01:40.194158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:01:40.194163 | orchestrator | Thursday 19 March 2026 05:01:19 +0000 (0:00:00.725) 0:25:12.908 ******** 2026-03-19 05:01:40.194169 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:01:40.194175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:01:40.194180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:01:40.194186 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194192 | orchestrator | 2026-03-19 05:01:40.194198 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-19 05:01:40.194204 | orchestrator | Thursday 19 March 2026 05:01:20 +0000 (0:00:00.768) 0:25:13.678 ******** 2026-03-19 05:01:40.194211 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194217 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194223 | orchestrator | 2026-03-19 05:01:40.194229 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:01:40.194235 | orchestrator | Thursday 19 March 2026 05:01:20 +0000 (0:00:00.578) 0:25:14.257 ******** 2026-03-19 05:01:40.194240 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 05:01:40.194247 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-19 05:01:40.194253 | orchestrator | 2026-03-19 05:01:40.194259 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 05:01:40.194265 | orchestrator | Thursday 19 March 2026 05:01:21 +0000 (0:00:00.598) 0:25:14.856 ******** 2026-03-19 05:01:40.194271 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194277 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194282 | orchestrator | 2026-03-19 05:01:40.194301 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-19 05:01:40.194308 | orchestrator | Thursday 19 March 2026 05:01:22 +0000 (0:00:00.984) 0:25:15.840 ******** 2026-03-19 05:01:40.194314 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194320 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.194325 | orchestrator | 2026-03-19 05:01:40.194331 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-19 05:01:40.194336 | orchestrator | Thursday 19 March 2026 05:01:22 +0000 (0:00:00.222) 0:25:16.062 ******** 2026-03-19 05:01:40.194342 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-3 2026-03-19 05:01:40.194349 | orchestrator | 2026-03-19 05:01:40.194355 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-19 05:01:40.194361 | orchestrator | Thursday 19 March 2026 05:01:23 +0000 (0:00:00.647) 0:25:16.709 ******** 2026-03-19 05:01:40.194367 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-19 05:01:40.194385 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-19 05:01:40.194400 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-19 05:01:40.194415 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-19 05:01:40.194422 | orchestrator | 2026-03-19 05:01:40.194428 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-19 05:01:40.194435 | orchestrator | Thursday 19 March 2026 05:01:24 +0000 (0:00:00.985) 0:25:17.695 ******** 2026-03-19 05:01:40.194442 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:01:40.194448 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 05:01:40.194454 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:01:40.194460 | orchestrator | 2026-03-19 05:01:40.194471 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:01:40.194477 | orchestrator | Thursday 19 March 2026 05:01:26 +0000 (0:00:02.526) 0:25:20.222 ******** 2026-03-19 05:01:40.194482 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-19 05:01:40.194489 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-19 05:01:40.194495 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194501 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-19 05:01:40.194507 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-03-19 05:01:40.194513 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194520 | orchestrator | 2026-03-19 05:01:40.194526 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-19 05:01:40.194531 | orchestrator | Thursday 19 March 2026 05:01:28 +0000 (0:00:01.114) 0:25:21.336 ******** 2026-03-19 05:01:40.194537 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194543 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194549 | orchestrator | 2026-03-19 05:01:40.194554 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-19 05:01:40.194560 | orchestrator | Thursday 19 March 2026 05:01:28 +0000 (0:00:00.608) 0:25:21.945 ******** 2026-03-19 05:01:40.194566 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194572 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:40.194579 | orchestrator | 2026-03-19 05:01:40.194585 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-19 05:01:40.194591 | orchestrator | Thursday 19 March 2026 05:01:28 +0000 (0:00:00.234) 0:25:22.180 ******** 2026-03-19 05:01:40.194597 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-3 2026-03-19 05:01:40.194604 | orchestrator | 2026-03-19 05:01:40.194611 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-19 05:01:40.194617 | orchestrator | Thursday 19 March 2026 05:01:29 +0000 (0:00:00.702) 0:25:22.882 ******** 2026-03-19 05:01:40.194623 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3 2026-03-19 05:01:40.194629 | orchestrator | 2026-03-19 05:01:40.194635 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-19 05:01:40.194640 | orchestrator | Thursday 19 March 2026 
05:01:29 +0000 (0:00:00.374) 0:25:23.257 ******** 2026-03-19 05:01:40.194646 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194652 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194659 | orchestrator | 2026-03-19 05:01:40.194666 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-19 05:01:40.194672 | orchestrator | Thursday 19 March 2026 05:01:31 +0000 (0:00:01.154) 0:25:24.412 ******** 2026-03-19 05:01:40.194678 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194685 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194692 | orchestrator | 2026-03-19 05:01:40.194699 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-19 05:01:40.194706 | orchestrator | Thursday 19 March 2026 05:01:32 +0000 (0:00:01.028) 0:25:25.440 ******** 2026-03-19 05:01:40.194713 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194719 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194749 | orchestrator | 2026-03-19 05:01:40.194756 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-19 05:01:40.194770 | orchestrator | Thursday 19 March 2026 05:01:33 +0000 (0:00:01.417) 0:25:26.857 ******** 2026-03-19 05:01:40.194777 | orchestrator | changed: [testbed-node-4] 2026-03-19 05:01:40.194784 | orchestrator | changed: [testbed-node-3] 2026-03-19 05:01:40.194792 | orchestrator | 2026-03-19 05:01:40.194799 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-19 05:01:40.194806 | orchestrator | Thursday 19 March 2026 05:01:36 +0000 (0:00:02.914) 0:25:29.772 ******** 2026-03-19 05:01:40.194813 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:01:40.194820 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:40.194827 | orchestrator | 2026-03-19 05:01:40.194834 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-19 05:01:40.194840 | orchestrator | Thursday 19 March 2026 05:01:37 +0000 (0:00:00.881) 0:25:30.653 ******** 2026-03-19 05:01:40.194846 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:01:40.194863 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:01:47.516132 | orchestrator | 2026-03-19 05:01:47.516272 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-19 05:01:47.516301 | orchestrator | 2026-03-19 05:01:47.516321 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 05:01:47.516340 | orchestrator | Thursday 19 March 2026 05:01:40 +0000 (0:00:02.788) 0:25:33.442 ******** 2026-03-19 05:01:47.516357 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-19 05:01:47.516376 | orchestrator | 2026-03-19 05:01:47.516395 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 05:01:47.516414 | orchestrator | Thursday 19 March 2026 05:01:40 +0000 (0:00:00.283) 0:25:33.725 ******** 2026-03-19 05:01:47.516433 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516451 | orchestrator | 2026-03-19 05:01:47.516469 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 05:01:47.516487 | orchestrator | Thursday 19 March 2026 05:01:40 +0000 (0:00:00.446) 0:25:34.172 ******** 2026-03-19 05:01:47.516506 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516525 | orchestrator | 2026-03-19 05:01:47.516543 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 05:01:47.516562 | orchestrator | Thursday 19 March 2026 05:01:41 +0000 (0:00:00.149) 0:25:34.321 ******** 2026-03-19 05:01:47.516579 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516598 | 
orchestrator | 2026-03-19 05:01:47.516615 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:01:47.516633 | orchestrator | Thursday 19 March 2026 05:01:41 +0000 (0:00:00.723) 0:25:35.044 ******** 2026-03-19 05:01:47.516650 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516670 | orchestrator | 2026-03-19 05:01:47.516688 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 05:01:47.516708 | orchestrator | Thursday 19 March 2026 05:01:41 +0000 (0:00:00.153) 0:25:35.198 ******** 2026-03-19 05:01:47.516758 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516780 | orchestrator | 2026-03-19 05:01:47.516823 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 05:01:47.516844 | orchestrator | Thursday 19 March 2026 05:01:42 +0000 (0:00:00.154) 0:25:35.352 ******** 2026-03-19 05:01:47.516862 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.516881 | orchestrator | 2026-03-19 05:01:47.516899 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 05:01:47.516920 | orchestrator | Thursday 19 March 2026 05:01:42 +0000 (0:00:00.172) 0:25:35.525 ******** 2026-03-19 05:01:47.516939 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:47.516958 | orchestrator | 2026-03-19 05:01:47.516977 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 05:01:47.516996 | orchestrator | Thursday 19 March 2026 05:01:42 +0000 (0:00:00.149) 0:25:35.674 ******** 2026-03-19 05:01:47.517015 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.517067 | orchestrator | 2026-03-19 05:01:47.517088 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 05:01:47.517108 | orchestrator | Thursday 19 March 2026 05:01:42 +0000 
(0:00:00.132) 0:25:35.806 ******** 2026-03-19 05:01:47.517128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:01:47.517147 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:01:47.517166 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:01:47.517184 | orchestrator | 2026-03-19 05:01:47.517203 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-19 05:01:47.517221 | orchestrator | Thursday 19 March 2026 05:01:43 +0000 (0:00:00.678) 0:25:36.485 ******** 2026-03-19 05:01:47.517239 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:47.517259 | orchestrator | 2026-03-19 05:01:47.517276 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 05:01:47.517291 | orchestrator | Thursday 19 March 2026 05:01:43 +0000 (0:00:00.334) 0:25:36.819 ******** 2026-03-19 05:01:47.517306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:01:47.517324 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:01:47.517341 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:01:47.517358 | orchestrator | 2026-03-19 05:01:47.517374 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 05:01:47.517393 | orchestrator | Thursday 19 March 2026 05:01:45 +0000 (0:00:02.173) 0:25:38.993 ******** 2026-03-19 05:01:47.517412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 05:01:47.517430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 05:01:47.517449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 
05:01:47.517467 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:47.517484 | orchestrator | 2026-03-19 05:01:47.517501 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 05:01:47.517519 | orchestrator | Thursday 19 March 2026 05:01:46 +0000 (0:00:00.427) 0:25:39.421 ******** 2026-03-19 05:01:47.517541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 05:01:47.517562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 05:01:47.517613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 05:01:47.517633 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:47.517652 | orchestrator | 2026-03-19 05:01:47.517670 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 05:01:47.517689 | orchestrator | Thursday 19 March 2026 05:01:47 +0000 (0:00:00.987) 0:25:40.408 ******** 2026-03-19 05:01:47.517710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 
05:01:47.517762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:47.517815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:47.517837 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:47.517856 | orchestrator | 2026-03-19 05:01:47.517874 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 05:01:47.517893 | orchestrator | Thursday 19 March 2026 05:01:47 +0000 (0:00:00.167) 0:25:40.576 ******** 2026-03-19 05:01:47.517914 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 05:01:44.145873', 'end': '2026-03-19 05:01:44.193140', 'delta': '0:00:00.047267', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 05:01:47.517939 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 05:01:44.701937', 'end': '2026-03-19 05:01:44.751479', 'delta': '0:00:00.049542', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 05:01:47.517958 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 05:01:45.539861', 'end': '2026-03-19 05:01:45.585892', 'delta': '0:00:00.046031', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 05:01:47.517977 | orchestrator | 2026-03-19 05:01:47.518012 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 05:01:52.245199 | orchestrator | Thursday 19 March 2026 05:01:47 +0000 (0:00:00.194) 0:25:40.771 ******** 2026-03-19 05:01:52.245302 | orchestrator | ok: [testbed-node-3] 2026-03-19 
05:01:52.245318 | orchestrator | 2026-03-19 05:01:52.245331 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 05:01:52.245342 | orchestrator | Thursday 19 March 2026 05:01:48 +0000 (0:00:00.718) 0:25:41.490 ******** 2026-03-19 05:01:52.245354 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.245393 | orchestrator | 2026-03-19 05:01:52.245405 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-19 05:01:52.245416 | orchestrator | Thursday 19 March 2026 05:01:49 +0000 (0:00:01.001) 0:25:42.491 ******** 2026-03-19 05:01:52.245428 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:52.245439 | orchestrator | 2026-03-19 05:01:52.245450 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 05:01:52.245461 | orchestrator | Thursday 19 March 2026 05:01:49 +0000 (0:00:00.159) 0:25:42.651 ******** 2026-03-19 05:01:52.245472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:01:52.245483 | orchestrator | 2026-03-19 05:01:52.245582 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:01:52.245598 | orchestrator | Thursday 19 March 2026 05:01:50 +0000 (0:00:01.058) 0:25:43.709 ******** 2026-03-19 05:01:52.245609 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:52.245620 | orchestrator | 2026-03-19 05:01:52.245631 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 05:01:52.245643 | orchestrator | Thursday 19 March 2026 05:01:50 +0000 (0:00:00.181) 0:25:43.890 ******** 2026-03-19 05:01:52.245654 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.245665 | orchestrator | 2026-03-19 05:01:52.245676 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 05:01:52.245702 | orchestrator 
| Thursday 19 March 2026 05:01:50 +0000 (0:00:00.132) 0:25:44.023 ******** 2026-03-19 05:01:52.245713 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.245724 | orchestrator | 2026-03-19 05:01:52.245795 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:01:52.245810 | orchestrator | Thursday 19 March 2026 05:01:50 +0000 (0:00:00.222) 0:25:44.246 ******** 2026-03-19 05:01:52.245822 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.245835 | orchestrator | 2026-03-19 05:01:52.245901 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 05:01:52.245915 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.132) 0:25:44.378 ******** 2026-03-19 05:01:52.245929 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.245940 | orchestrator | 2026-03-19 05:01:52.245953 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 05:01:52.245992 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.133) 0:25:44.511 ******** 2026-03-19 05:01:52.246006 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:52.246109 | orchestrator | 2026-03-19 05:01:52.246126 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 05:01:52.246139 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.173) 0:25:44.685 ******** 2026-03-19 05:01:52.246150 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.246161 | orchestrator | 2026-03-19 05:01:52.246172 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 05:01:52.246183 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.147) 0:25:44.832 ******** 2026-03-19 05:01:52.246194 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:52.246205 | orchestrator | 2026-03-19 05:01:52.246244 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 05:01:52.246255 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.176) 0:25:45.008 ******** 2026-03-19 05:01:52.246266 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.246277 | orchestrator | 2026-03-19 05:01:52.246288 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 05:01:52.246301 | orchestrator | Thursday 19 March 2026 05:01:51 +0000 (0:00:00.117) 0:25:45.126 ******** 2026-03-19 05:01:52.246312 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:01:52.246322 | orchestrator | 2026-03-19 05:01:52.246333 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 05:01:52.246345 | orchestrator | Thursday 19 March 2026 05:01:52 +0000 (0:00:00.159) 0:25:45.286 ******** 2026-03-19 05:01:52.246358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.246385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}})  2026-03-19 05:01:52.246422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 05:01:52.246443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}})  2026-03-19 05:01:52.246456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.246468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.246480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 05:01:52.246499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.246511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:01:52.246531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.864866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}})  2026-03-19 05:01:52.865026 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}})  2026-03-19 05:01:52.865056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.865078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 05:01:52.865131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.865145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:01:52.865162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:01:52.865174 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:01:52.865186 | orchestrator | 2026-03-19 05:01:52.865197 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 05:01:52.865208 | orchestrator | Thursday 19 March 2026 05:01:52 +0000 (0:00:00.615) 0:25:45.901 ******** 2026-03-19 05:01:52.865219 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:52.865237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e', 'dm-uuid-LVM-tKomHJTMlNUD0zk4AOsWK0hZxqX95vWXnjWYRyKXrSi4hVi0OytFF40eCBiNeUgp'], 'uuids': ['ce00926a-8920-482f-aac1-989231e28d63'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:52.865248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422', 'scsi-SQEMU_QEMU_HARDDISK_39b473cc-c557-499b-ae61-29aaa57bd422'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '39b473cc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:52.865267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oOLfl5-IuUq-yk2W-CFze-Fnb3-FYP3-tWbWI4', 'scsi-0QEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d', 'scsi-SQEMU_QEMU_HARDDISK_57dec018-1465-4558-908d-748a1c147c6d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.059958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M', 'dm-uuid-CRYPT-LUKS2-e21c4ca452c14e1186606d25edfe5b5f-p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060186 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--55f97389--0425--5b31--8593--f3b3ad53d7f9-osd--block--55f97389--0425--5b31--8593--f3b3ad53d7f9', 'dm-uuid-LVM-NcMh0hsizRlOQbqIRPqpBhorKdkbTdPXp4DIDUljPTxbR9E1DVB6oPx5dXL0oZ5M'], 'uuids': ['e21c4ca4-52c1-4e11-8660-6d25edfe5b5f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '57dec018', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p4DIDU-ljPT-xbR9-E1DV-B6oP-x5dX-L0oZ5M']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ZrCDEJ-gdv6-UCW3-XJIc-Xzsd-HjYm-Ii0HSK', 'scsi-0QEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1', 'scsi-SQEMU_QEMU_HARDDISK_882bbde8-c2a7-4908-ad99-b7a0a7d616d1'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '882bbde8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--432058d8--20d3--534b--84ac--2a35b6cfcd9e-osd--block--432058d8--20d3--534b--84ac--2a35b6cfcd9e']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:01:53.060242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd4a185e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd4a185e-e644-4224-9e55-45e03a3199c2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:02:01.608069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:02:01.608208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:02:01.608226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp', 'dm-uuid-CRYPT-LUKS2-ce00926a8920482faac1989231e28d63-njWYRy-KXrS-i4hV-i0Oy-tFF4-0eCB-iNeUgp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:02:01.608240 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608254 | orchestrator | 2026-03-19 05:02:01.608266 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 05:02:01.608279 | orchestrator | Thursday 19 March 2026 05:01:53 +0000 (0:00:00.413) 0:25:46.315 ******** 2026-03-19 05:02:01.608291 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:01.608303 | orchestrator | 2026-03-19 05:02:01.608314 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 05:02:01.608325 | orchestrator | Thursday 19 March 2026 05:01:53 +0000 (0:00:00.536) 0:25:46.851 ******** 2026-03-19 05:02:01.608336 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:01.608347 | orchestrator | 2026-03-19 05:02:01.608358 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:02:01.608369 | orchestrator | Thursday 19 March 2026 05:01:53 +0000 (0:00:00.137) 0:25:46.989 ******** 2026-03-19 05:02:01.608380 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:01.608391 | orchestrator | 2026-03-19 05:02:01.608403 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:02:01.608414 | orchestrator | Thursday 19 March 2026 05:01:54 +0000 (0:00:00.478) 0:25:47.468 ******** 2026-03-19 05:02:01.608425 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608436 | orchestrator | 2026-03-19 05:02:01.608447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:02:01.608458 | orchestrator | Thursday 19 March 2026 05:01:54 +0000 (0:00:00.140) 0:25:47.608 ******** 2026-03-19 05:02:01.608469 | orchestrator | skipping: [testbed-node-3] 2026-03-19 
05:02:01.608480 | orchestrator | 2026-03-19 05:02:01.608491 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:02:01.608502 | orchestrator | Thursday 19 March 2026 05:01:54 +0000 (0:00:00.232) 0:25:47.841 ******** 2026-03-19 05:02:01.608513 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608524 | orchestrator | 2026-03-19 05:02:01.608535 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 05:02:01.608546 | orchestrator | Thursday 19 March 2026 05:01:54 +0000 (0:00:00.156) 0:25:47.997 ******** 2026-03-19 05:02:01.608557 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-19 05:02:01.608570 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-19 05:02:01.608588 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-19 05:02:01.608600 | orchestrator | 2026-03-19 05:02:01.608611 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 05:02:01.608622 | orchestrator | Thursday 19 March 2026 05:01:55 +0000 (0:00:00.983) 0:25:48.981 ******** 2026-03-19 05:02:01.608633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-19 05:02:01.608659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-19 05:02:01.608670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-19 05:02:01.608681 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608692 | orchestrator | 2026-03-19 05:02:01.608703 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 05:02:01.608714 | orchestrator | Thursday 19 March 2026 05:01:55 +0000 (0:00:00.194) 0:25:49.176 ******** 2026-03-19 05:02:01.608809 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-19 05:02:01.608833 | 
orchestrator | 2026-03-19 05:02:01.608851 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:02:01.608864 | orchestrator | Thursday 19 March 2026 05:01:56 +0000 (0:00:00.208) 0:25:49.384 ******** 2026-03-19 05:02:01.608876 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608887 | orchestrator | 2026-03-19 05:02:01.608898 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:02:01.608909 | orchestrator | Thursday 19 March 2026 05:01:56 +0000 (0:00:00.142) 0:25:49.526 ******** 2026-03-19 05:02:01.608920 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608931 | orchestrator | 2026-03-19 05:02:01.608942 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:02:01.608953 | orchestrator | Thursday 19 March 2026 05:01:56 +0000 (0:00:00.426) 0:25:49.953 ******** 2026-03-19 05:02:01.608964 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.608974 | orchestrator | 2026-03-19 05:02:01.608986 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:02:01.608997 | orchestrator | Thursday 19 March 2026 05:01:56 +0000 (0:00:00.158) 0:25:50.112 ******** 2026-03-19 05:02:01.609008 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:01.609019 | orchestrator | 2026-03-19 05:02:01.609131 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:02:01.609151 | orchestrator | Thursday 19 March 2026 05:01:57 +0000 (0:00:00.264) 0:25:50.376 ******** 2026-03-19 05:02:01.609170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:01.609182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:01.609192 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-19 05:02:01.609203 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.609214 | orchestrator | 2026-03-19 05:02:01.609224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:02:01.609235 | orchestrator | Thursday 19 March 2026 05:01:57 +0000 (0:00:00.383) 0:25:50.760 ******** 2026-03-19 05:02:01.609246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:01.609257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:01.609268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 05:02:01.609278 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.609289 | orchestrator | 2026-03-19 05:02:01.609300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:02:01.609311 | orchestrator | Thursday 19 March 2026 05:01:57 +0000 (0:00:00.438) 0:25:51.198 ******** 2026-03-19 05:02:01.609321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:01.609332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:01.609343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 05:02:01.609364 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:01.609375 | orchestrator | 2026-03-19 05:02:01.609386 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 05:02:01.609397 | orchestrator | Thursday 19 March 2026 05:01:58 +0000 (0:00:00.411) 0:25:51.609 ******** 2026-03-19 05:02:01.609408 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:01.609419 | orchestrator | 2026-03-19 05:02:01.609430 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:02:01.609441 | orchestrator | Thursday 19 March 2026 05:01:58 +0000 
(0:00:00.173) 0:25:51.783 ********
2026-03-19 05:02:01.609451 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-19 05:02:01.609462 | orchestrator |
2026-03-19 05:02:01.609473 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-19 05:02:01.609484 | orchestrator | Thursday 19 March 2026 05:01:58 +0000 (0:00:00.345) 0:25:52.128 ********
2026-03-19 05:02:01.609494 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 05:02:01.609505 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 05:02:01.609516 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 05:02:01.609527 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 05:02:01.609538 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 05:02:01.609549 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-19 05:02:01.609559 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 05:02:01.609570 | orchestrator |
2026-03-19 05:02:01.609581 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-19 05:02:01.609592 | orchestrator | Thursday 19 March 2026 05:01:59 +0000 (0:00:01.124) 0:25:53.253 ********
2026-03-19 05:02:01.609602 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 05:02:01.609613 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 05:02:01.609624 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 05:02:01.609642 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-19 05:02:01.609653 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 05:02:01.609664 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-19 05:02:01.609675 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 05:02:01.609686 | orchestrator |
2026-03-19 05:02:01.609706 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-19 05:02:16.498705 | orchestrator | Thursday 19 March 2026 05:02:01 +0000 (0:00:01.603) 0:25:54.857 ********
2026-03-19 05:02:16.498924 | orchestrator | changed: [testbed-node-3]
2026-03-19 05:02:16.498943 | orchestrator |
2026-03-19 05:02:16.498956 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-19 05:02:16.498968 | orchestrator | Thursday 19 March 2026 05:02:02 +0000 (0:00:01.311) 0:25:56.168 ********
2026-03-19 05:02:16.498980 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-19 05:02:16.498992 | orchestrator |
2026-03-19 05:02:16.499003 | orchestrator | TASK [Stop ceph rgw (pt.
2)] *************************************************** 2026-03-19 05:02:16.499014 | orchestrator | Thursday 19 March 2026 05:02:04 +0000 (0:00:01.920) 0:25:58.089 ******** 2026-03-19 05:02:16.499040 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:02:16.499052 | orchestrator | 2026-03-19 05:02:16.499063 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 05:02:16.499101 | orchestrator | Thursday 19 March 2026 05:02:06 +0000 (0:00:01.491) 0:25:59.581 ******** 2026-03-19 05:02:16.499113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-19 05:02:16.499124 | orchestrator | 2026-03-19 05:02:16.499136 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 05:02:16.499156 | orchestrator | Thursday 19 March 2026 05:02:06 +0000 (0:00:00.156) 0:25:59.738 ******** 2026-03-19 05:02:16.499175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-19 05:02:16.499193 | orchestrator | 2026-03-19 05:02:16.499211 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 05:02:16.499230 | orchestrator | Thursday 19 March 2026 05:02:06 +0000 (0:00:00.176) 0:25:59.914 ******** 2026-03-19 05:02:16.499249 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.499270 | orchestrator | 2026-03-19 05:02:16.499289 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 05:02:16.499307 | orchestrator | Thursday 19 March 2026 05:02:06 +0000 (0:00:00.118) 0:26:00.032 ******** 2026-03-19 05:02:16.499327 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:16.499348 | orchestrator | 2026-03-19 05:02:16.499367 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ********************************
2026-03-19 05:02:16.499387 | orchestrator | Thursday 19 March 2026 05:02:07 +0000 (0:00:00.512) 0:26:00.544 ********
2026-03-19 05:02:16.499407 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.499427 | orchestrator |
2026-03-19 05:02:16.499449 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 05:02:16.499468 | orchestrator | Thursday 19 March 2026 05:02:07 +0000 (0:00:00.527) 0:26:01.072 ********
2026-03-19 05:02:16.499488 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.499508 | orchestrator |
2026-03-19 05:02:16.499528 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 05:02:16.499548 | orchestrator | Thursday 19 March 2026 05:02:08 +0000 (0:00:00.562) 0:26:01.635 ********
2026-03-19 05:02:16.499566 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.499595 | orchestrator |
2026-03-19 05:02:16.499616 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 05:02:16.499633 | orchestrator | Thursday 19 March 2026 05:02:08 +0000 (0:00:00.150) 0:26:01.785 ********
2026-03-19 05:02:16.499650 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.499668 | orchestrator |
2026-03-19 05:02:16.499685 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 05:02:16.499703 | orchestrator | Thursday 19 March 2026 05:02:08 +0000 (0:00:00.123) 0:26:01.909 ********
2026-03-19 05:02:16.499719 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.499767 | orchestrator |
2026-03-19 05:02:16.499784 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 05:02:16.499803 | orchestrator | Thursday 19 March 2026 05:02:08 +0000 (0:00:00.129) 0:26:02.039 ********
2026-03-19 05:02:16.499820 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.499836 | orchestrator |
2026-03-19 05:02:16.499854 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 05:02:16.499871 | orchestrator | Thursday 19 March 2026 05:02:09 +0000 (0:00:00.544) 0:26:02.584 ********
2026-03-19 05:02:16.499889 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.499907 | orchestrator |
2026-03-19 05:02:16.499925 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 05:02:16.499944 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.857) 0:26:03.441 ********
2026-03-19 05:02:16.499963 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.499981 | orchestrator |
2026-03-19 05:02:16.500001 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 05:02:16.500017 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.134) 0:26:03.575 ********
2026-03-19 05:02:16.500028 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500039 | orchestrator |
2026-03-19 05:02:16.500063 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 05:02:16.500075 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.172) 0:26:03.747 ********
2026-03-19 05:02:16.500086 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.500097 | orchestrator |
2026-03-19 05:02:16.500108 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 05:02:16.500134 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.156) 0:26:03.904 ********
2026-03-19 05:02:16.500145 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.500156 | orchestrator |
2026-03-19 05:02:16.500167 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 05:02:16.500178 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.153) 0:26:04.058 ********
2026-03-19 05:02:16.500189 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.500200 | orchestrator |
2026-03-19 05:02:16.500234 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 05:02:16.500246 | orchestrator | Thursday 19 March 2026 05:02:10 +0000 (0:00:00.167) 0:26:04.225 ********
2026-03-19 05:02:16.500257 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500268 | orchestrator |
2026-03-19 05:02:16.500279 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 05:02:16.500290 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.170) 0:26:04.396 ********
2026-03-19 05:02:16.500301 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500312 | orchestrator |
2026-03-19 05:02:16.500322 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 05:02:16.500333 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.127) 0:26:04.524 ********
2026-03-19 05:02:16.500344 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500355 | orchestrator |
2026-03-19 05:02:16.500366 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 05:02:16.500380 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.142) 0:26:04.666 ********
2026-03-19 05:02:16.500404 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.500431 | orchestrator |
2026-03-19 05:02:16.500449 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 05:02:16.500467 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.152) 0:26:04.818 ********
2026-03-19 05:02:16.500483 | orchestrator | ok: [testbed-node-3]
2026-03-19 05:02:16.500499 | orchestrator |
2026-03-19 05:02:16.500516 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 05:02:16.500534 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.237) 0:26:05.056 ********
2026-03-19 05:02:16.500550 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500567 | orchestrator |
2026-03-19 05:02:16.500585 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 05:02:16.500604 | orchestrator | Thursday 19 March 2026 05:02:11 +0000 (0:00:00.168) 0:26:05.225 ********
2026-03-19 05:02:16.500623 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500639 | orchestrator |
2026-03-19 05:02:16.500650 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 05:02:16.500661 | orchestrator | Thursday 19 March 2026 05:02:12 +0000 (0:00:00.496) 0:26:05.722 ********
2026-03-19 05:02:16.500672 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500683 | orchestrator |
2026-03-19 05:02:16.500693 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 05:02:16.500704 | orchestrator | Thursday 19 March 2026 05:02:12 +0000 (0:00:00.136) 0:26:05.859 ********
2026-03-19 05:02:16.500715 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500726 | orchestrator |
2026-03-19 05:02:16.500771 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 05:02:16.500782 | orchestrator | Thursday 19 March 2026 05:02:12 +0000 (0:00:00.131) 0:26:05.991 ********
2026-03-19 05:02:16.500793 | orchestrator | skipping: [testbed-node-3]
2026-03-19 05:02:16.500804 | orchestrator |
2026-03-19 05:02:16.500826 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 05:02:16.500837 | orchestrator | Thursday 19 March 2026 05:02:12 +0000 (0:00:00.135) 0:26:06.126 ********
2026-03-19 05:02:16.500848 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.500859 | orchestrator | 2026-03-19 05:02:16.500870 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-19 05:02:16.500880 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.138) 0:26:06.264 ******** 2026-03-19 05:02:16.500891 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.500902 | orchestrator | 2026-03-19 05:02:16.500914 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-19 05:02:16.500926 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.143) 0:26:06.408 ******** 2026-03-19 05:02:16.500937 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.500948 | orchestrator | 2026-03-19 05:02:16.500958 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-19 05:02:16.500969 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.144) 0:26:06.553 ******** 2026-03-19 05:02:16.500980 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.500991 | orchestrator | 2026-03-19 05:02:16.501002 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-19 05:02:16.501013 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.132) 0:26:06.685 ******** 2026-03-19 05:02:16.501024 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.501035 | orchestrator | 2026-03-19 05:02:16.501045 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-19 05:02:16.501056 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.172) 0:26:06.858 ******** 2026-03-19 05:02:16.501067 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.501078 | orchestrator | 2026-03-19 05:02:16.501089 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-19 05:02:16.501100 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.141) 0:26:07.000 ******** 2026-03-19 05:02:16.501111 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:16.501122 | orchestrator | 2026-03-19 05:02:16.501133 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-19 05:02:16.501144 | orchestrator | Thursday 19 March 2026 05:02:13 +0000 (0:00:00.213) 0:26:07.214 ******** 2026-03-19 05:02:16.501155 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:16.501166 | orchestrator | 2026-03-19 05:02:16.501177 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-19 05:02:16.501188 | orchestrator | Thursday 19 March 2026 05:02:14 +0000 (0:00:00.953) 0:26:08.167 ******** 2026-03-19 05:02:16.501207 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:16.501218 | orchestrator | 2026-03-19 05:02:16.501229 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-19 05:02:16.501240 | orchestrator | Thursday 19 March 2026 05:02:16 +0000 (0:00:01.255) 0:26:09.423 ******** 2026-03-19 05:02:16.501251 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-19 05:02:16.501263 | orchestrator | 2026-03-19 05:02:16.501274 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-19 05:02:16.501295 | orchestrator | Thursday 19 March 2026 05:02:16 +0000 (0:00:00.330) 0:26:09.753 ******** 2026-03-19 05:02:31.906961 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907079 | orchestrator | 2026-03-19 05:02:31.907095 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-19 05:02:31.907108 | orchestrator | Thursday 19 March 2026 05:02:16 +0000 (0:00:00.115) 0:26:09.868 ******** 
2026-03-19 05:02:31.907120 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907131 | orchestrator | 2026-03-19 05:02:31.907143 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-19 05:02:31.907154 | orchestrator | Thursday 19 March 2026 05:02:16 +0000 (0:00:00.125) 0:26:09.993 ******** 2026-03-19 05:02:31.907188 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-19 05:02:31.907200 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-19 05:02:31.907212 | orchestrator | 2026-03-19 05:02:31.907223 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-19 05:02:31.907233 | orchestrator | Thursday 19 March 2026 05:02:17 +0000 (0:00:00.853) 0:26:10.847 ******** 2026-03-19 05:02:31.907244 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:31.907256 | orchestrator | 2026-03-19 05:02:31.907268 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-19 05:02:31.907279 | orchestrator | Thursday 19 March 2026 05:02:18 +0000 (0:00:00.458) 0:26:11.305 ******** 2026-03-19 05:02:31.907290 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907300 | orchestrator | 2026-03-19 05:02:31.907311 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-19 05:02:31.907323 | orchestrator | Thursday 19 March 2026 05:02:18 +0000 (0:00:00.140) 0:26:11.446 ******** 2026-03-19 05:02:31.907333 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907344 | orchestrator | 2026-03-19 05:02:31.907355 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-19 05:02:31.907366 | orchestrator | Thursday 19 March 2026 05:02:18 +0000 (0:00:00.131) 0:26:11.578 ******** 2026-03-19 05:02:31.907377 | orchestrator | 
skipping: [testbed-node-3] 2026-03-19 05:02:31.907388 | orchestrator | 2026-03-19 05:02:31.907398 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-19 05:02:31.907409 | orchestrator | Thursday 19 March 2026 05:02:18 +0000 (0:00:00.111) 0:26:11.690 ******** 2026-03-19 05:02:31.907420 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-19 05:02:31.907432 | orchestrator | 2026-03-19 05:02:31.907443 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-19 05:02:31.907454 | orchestrator | Thursday 19 March 2026 05:02:18 +0000 (0:00:00.171) 0:26:11.861 ******** 2026-03-19 05:02:31.907465 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:31.907476 | orchestrator | 2026-03-19 05:02:31.907487 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-19 05:02:31.907500 | orchestrator | Thursday 19 March 2026 05:02:19 +0000 (0:00:00.653) 0:26:12.515 ******** 2026-03-19 05:02:31.907512 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-19 05:02:31.907525 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-19 05:02:31.907539 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-19 05:02:31.907551 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907564 | orchestrator | 2026-03-19 05:02:31.907576 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-19 05:02:31.907589 | orchestrator | Thursday 19 March 2026 05:02:19 +0000 (0:00:00.135) 0:26:12.650 ******** 2026-03-19 05:02:31.907602 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907615 | orchestrator | 2026-03-19 05:02:31.907628 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-19 05:02:31.907640 | orchestrator | Thursday 19 March 2026 05:02:19 +0000 (0:00:00.350) 0:26:13.000 ******** 2026-03-19 05:02:31.907653 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907665 | orchestrator | 2026-03-19 05:02:31.907677 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-19 05:02:31.907690 | orchestrator | Thursday 19 March 2026 05:02:19 +0000 (0:00:00.173) 0:26:13.174 ******** 2026-03-19 05:02:31.907702 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907714 | orchestrator | 2026-03-19 05:02:31.907727 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-19 05:02:31.907761 | orchestrator | Thursday 19 March 2026 05:02:20 +0000 (0:00:00.162) 0:26:13.336 ******** 2026-03-19 05:02:31.907774 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907799 | orchestrator | 2026-03-19 05:02:31.907812 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-19 05:02:31.907825 | orchestrator | Thursday 19 March 2026 05:02:20 +0000 (0:00:00.153) 0:26:13.490 ******** 2026-03-19 05:02:31.907838 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.907851 | orchestrator | 2026-03-19 05:02:31.907863 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-19 05:02:31.907874 | orchestrator | Thursday 19 March 2026 05:02:20 +0000 (0:00:00.164) 0:26:13.655 ******** 2026-03-19 05:02:31.907885 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:31.907896 | orchestrator | 2026-03-19 05:02:31.907906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-19 05:02:31.907918 | orchestrator | Thursday 19 March 2026 05:02:21 +0000 (0:00:01.520) 0:26:15.175 ******** 2026-03-19 05:02:31.907955 | orchestrator | ok: 
[testbed-node-3] 2026-03-19 05:02:31.907974 | orchestrator | 2026-03-19 05:02:31.907992 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-19 05:02:31.908011 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.169) 0:26:15.345 ******** 2026-03-19 05:02:31.908030 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-19 05:02:31.908049 | orchestrator | 2026-03-19 05:02:31.908067 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-19 05:02:31.908104 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.229) 0:26:15.575 ******** 2026-03-19 05:02:31.908116 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908127 | orchestrator | 2026-03-19 05:02:31.908138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-19 05:02:31.908149 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.149) 0:26:15.724 ******** 2026-03-19 05:02:31.908160 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908171 | orchestrator | 2026-03-19 05:02:31.908182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-19 05:02:31.908192 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.157) 0:26:15.882 ******** 2026-03-19 05:02:31.908203 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908214 | orchestrator | 2026-03-19 05:02:31.908225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-19 05:02:31.908236 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.156) 0:26:16.038 ******** 2026-03-19 05:02:31.908246 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908257 | orchestrator | 2026-03-19 05:02:31.908268 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-19 05:02:31.908279 | orchestrator | Thursday 19 March 2026 05:02:22 +0000 (0:00:00.164) 0:26:16.203 ******** 2026-03-19 05:02:31.908289 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908300 | orchestrator | 2026-03-19 05:02:31.908311 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-19 05:02:31.908321 | orchestrator | Thursday 19 March 2026 05:02:23 +0000 (0:00:00.411) 0:26:16.615 ******** 2026-03-19 05:02:31.908332 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908342 | orchestrator | 2026-03-19 05:02:31.908353 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-19 05:02:31.908364 | orchestrator | Thursday 19 March 2026 05:02:23 +0000 (0:00:00.148) 0:26:16.763 ******** 2026-03-19 05:02:31.908375 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908386 | orchestrator | 2026-03-19 05:02:31.908397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-19 05:02:31.908408 | orchestrator | Thursday 19 March 2026 05:02:23 +0000 (0:00:00.159) 0:26:16.922 ******** 2026-03-19 05:02:31.908419 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:31.908429 | orchestrator | 2026-03-19 05:02:31.908440 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-19 05:02:31.908451 | orchestrator | Thursday 19 March 2026 05:02:23 +0000 (0:00:00.145) 0:26:17.068 ******** 2026-03-19 05:02:31.908470 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:31.908481 | orchestrator | 2026-03-19 05:02:31.908492 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-19 05:02:31.908503 | orchestrator | Thursday 19 March 2026 05:02:24 +0000 (0:00:00.233) 0:26:17.301 ******** 2026-03-19 05:02:31.908514 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-19 05:02:31.908525 | orchestrator | 2026-03-19 05:02:31.908535 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-19 05:02:31.908546 | orchestrator | Thursday 19 March 2026 05:02:24 +0000 (0:00:00.224) 0:26:17.526 ******** 2026-03-19 05:02:31.908557 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-19 05:02:31.908568 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-19 05:02:31.908579 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-19 05:02:31.908590 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-19 05:02:31.908600 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-19 05:02:31.908611 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-19 05:02:31.908622 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-19 05:02:31.908632 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-19 05:02:31.908643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-19 05:02:31.908654 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-19 05:02:31.908665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-19 05:02:31.908675 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-19 05:02:31.908686 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-19 05:02:31.908697 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-19 05:02:31.908707 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-19 05:02:31.908718 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-19 05:02:31.908729 | orchestrator | 2026-03-19 05:02:31.908790 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-19 05:02:31.908802 | orchestrator | Thursday 19 March 2026 05:02:30 +0000 (0:00:05.885) 0:26:23.411 ******** 2026-03-19 05:02:31.908813 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-19 05:02:31.908824 | orchestrator | 2026-03-19 05:02:31.908835 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-19 05:02:31.908845 | orchestrator | Thursday 19 March 2026 05:02:30 +0000 (0:00:00.210) 0:26:23.622 ******** 2026-03-19 05:02:31.908856 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:02:31.908868 | orchestrator | 2026-03-19 05:02:31.908885 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-19 05:02:31.908896 | orchestrator | Thursday 19 March 2026 05:02:30 +0000 (0:00:00.529) 0:26:24.152 ******** 2026-03-19 05:02:31.908907 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:02:31.908918 | orchestrator | 2026-03-19 05:02:31.908929 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-19 05:02:31.908946 | orchestrator | Thursday 19 March 2026 05:02:31 +0000 (0:00:01.007) 0:26:25.159 ******** 2026-03-19 05:02:52.028627 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028722 | orchestrator | 2026-03-19 05:02:52.028733 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-19 05:02:52.028785 | orchestrator | Thursday 19 March 2026 05:02:32 +0000 (0:00:00.432) 0:26:25.592 ******** 2026-03-19 05:02:52.028794 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028802 | 
orchestrator | 2026-03-19 05:02:52.028809 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-19 05:02:52.028839 | orchestrator | Thursday 19 March 2026 05:02:32 +0000 (0:00:00.150) 0:26:25.742 ******** 2026-03-19 05:02:52.028846 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028853 | orchestrator | 2026-03-19 05:02:52.028860 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-19 05:02:52.028867 | orchestrator | Thursday 19 March 2026 05:02:32 +0000 (0:00:00.154) 0:26:25.897 ******** 2026-03-19 05:02:52.028874 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028880 | orchestrator | 2026-03-19 05:02:52.028887 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-19 05:02:52.028894 | orchestrator | Thursday 19 March 2026 05:02:32 +0000 (0:00:00.127) 0:26:26.025 ******** 2026-03-19 05:02:52.028901 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028908 | orchestrator | 2026-03-19 05:02:52.028915 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-19 05:02:52.028923 | orchestrator | Thursday 19 March 2026 05:02:32 +0000 (0:00:00.139) 0:26:26.164 ******** 2026-03-19 05:02:52.028930 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028936 | orchestrator | 2026-03-19 05:02:52.028943 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-19 05:02:52.028950 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.142) 0:26:26.306 ******** 2026-03-19 05:02:52.028956 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028963 | orchestrator | 2026-03-19 05:02:52.028970 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-19 05:02:52.028977 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.142) 0:26:26.449 ******** 2026-03-19 05:02:52.028983 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.028990 | orchestrator | 2026-03-19 05:02:52.028997 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-19 05:02:52.029004 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.132) 0:26:26.581 ******** 2026-03-19 05:02:52.029010 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029017 | orchestrator | 2026-03-19 05:02:52.029024 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-19 05:02:52.029031 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.138) 0:26:26.719 ******** 2026-03-19 05:02:52.029038 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029045 | orchestrator | 2026-03-19 05:02:52.029051 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-19 05:02:52.029058 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.134) 0:26:26.853 ******** 2026-03-19 05:02:52.029065 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029072 | orchestrator | 2026-03-19 05:02:52.029078 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-19 05:02:52.029085 | orchestrator | Thursday 19 March 2026 05:02:33 +0000 (0:00:00.153) 0:26:27.007 ******** 2026-03-19 05:02:52.029092 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-19 05:02:52.029099 | orchestrator | 2026-03-19 05:02:52.029105 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-19 05:02:52.029112 | orchestrator | Thursday 19 March 2026 05:02:37 +0000 (0:00:03.561) 0:26:30.568 ******** 2026-03-19 05:02:52.029119 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:02:52.029127 | orchestrator | 2026-03-19 05:02:52.029134 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-19 05:02:52.029141 | orchestrator | Thursday 19 March 2026 05:02:37 +0000 (0:00:00.180) 0:26:30.749 ******** 2026-03-19 05:02:52.029149 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-19 05:02:52.029165 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-19 05:02:52.029174 | orchestrator | 2026-03-19 05:02:52.029181 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-19 05:02:52.029199 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:04.621) 0:26:35.371 ******** 2026-03-19 05:02:52.029206 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029213 | orchestrator | 2026-03-19 05:02:52.029220 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-19 05:02:52.029227 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:00.143) 0:26:35.514 ******** 2026-03-19 05:02:52.029234 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029240 | orchestrator | 2026-03-19 05:02:52.029247 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:02:52.029267 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:00.130) 0:26:35.645 ******** 2026-03-19 05:02:52.029274 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029281 | orchestrator | 2026-03-19 05:02:52.029288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:02:52.029295 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:00.170) 0:26:35.816 ******** 2026-03-19 05:02:52.029302 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029309 | orchestrator | 2026-03-19 05:02:52.029316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:02:52.029322 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:00.176) 0:26:35.993 ******** 2026-03-19 05:02:52.029329 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029336 | orchestrator | 2026-03-19 05:02:52.029343 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:02:52.029350 | orchestrator | Thursday 19 March 2026 05:02:42 +0000 (0:00:00.166) 0:26:36.160 ******** 2026-03-19 05:02:52.029357 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:52.029364 | orchestrator | 2026-03-19 05:02:52.029371 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:02:52.029377 | orchestrator | Thursday 19 March 2026 05:02:43 +0000 (0:00:00.258) 0:26:36.418 ******** 2026-03-19 05:02:52.029384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:52.029391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:52.029398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 05:02:52.029405 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 05:02:52.029412 | orchestrator | 2026-03-19 05:02:52.029419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:02:52.029426 | orchestrator | Thursday 19 March 2026 05:02:43 +0000 (0:00:00.453) 0:26:36.871 ******** 2026-03-19 05:02:52.029433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:52.029439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:52.029446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 05:02:52.029453 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029460 | orchestrator | 2026-03-19 05:02:52.029467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:02:52.029474 | orchestrator | Thursday 19 March 2026 05:02:44 +0000 (0:00:00.422) 0:26:37.294 ******** 2026-03-19 05:02:52.029481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-19 05:02:52.029487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-19 05:02:52.029499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-19 05:02:52.029506 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029513 | orchestrator | 2026-03-19 05:02:52.029520 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 05:02:52.029527 | orchestrator | Thursday 19 March 2026 05:02:44 +0000 (0:00:00.428) 0:26:37.722 ******** 2026-03-19 05:02:52.029533 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:52.029540 | orchestrator | 2026-03-19 05:02:52.029547 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:02:52.029554 | orchestrator | Thursday 19 March 2026 05:02:44 +0000 (0:00:00.172) 0:26:37.894 ******** 2026-03-19 05:02:52.029561 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-19 05:02:52.029568 | orchestrator | 2026-03-19 05:02:52.029575 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 05:02:52.029581 | orchestrator | Thursday 19 March 2026 05:02:45 +0000 (0:00:00.437) 0:26:38.332 ******** 2026-03-19 05:02:52.029588 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:52.029595 | orchestrator | 2026-03-19 05:02:52.029602 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-19 05:02:52.029609 | orchestrator | Thursday 19 March 2026 05:02:46 +0000 (0:00:01.522) 0:26:39.854 ******** 2026-03-19 05:02:52.029616 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-03-19 05:02:52.029622 | orchestrator | 2026-03-19 05:02:52.029629 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 05:02:52.029636 | orchestrator | Thursday 19 March 2026 05:02:47 +0000 (0:00:00.568) 0:26:40.423 ******** 2026-03-19 05:02:52.029643 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:02:52.029650 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 05:02:52.029657 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:02:52.029664 | orchestrator | 2026-03-19 05:02:52.029671 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:02:52.029678 | orchestrator | Thursday 19 March 2026 05:02:49 +0000 (0:00:02.399) 0:26:42.822 ******** 2026-03-19 05:02:52.029689 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-19 05:02:52.029700 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-19 05:02:52.029711 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:02:52.029722 | orchestrator | 2026-03-19 05:02:52.029732 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-19 05:02:52.029759 | orchestrator | Thursday 19 March 2026 05:02:50 +0000 (0:00:01.080) 0:26:43.902 ******** 2026-03-19 05:02:52.029770 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:02:52.029779 | orchestrator | 2026-03-19 05:02:52.029796 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-19 05:02:52.029807 | orchestrator | Thursday 19 March 2026 05:02:50 +0000 (0:00:00.144) 0:26:44.047 ******** 2026-03-19 05:02:52.029819 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-03-19 05:02:52.029831 | orchestrator | 2026-03-19 05:02:52.029843 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-19 05:02:52.029853 | orchestrator | Thursday 19 March 2026 05:02:51 +0000 (0:00:00.557) 0:26:44.605 ******** 2026-03-19 05:02:52.029872 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:03:47.309292 | orchestrator | 2026-03-19 05:03:47.309413 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-19 05:03:47.309431 | orchestrator | Thursday 19 March 2026 05:02:52 +0000 (0:00:00.674) 0:26:45.279 ******** 2026-03-19 05:03:47.309443 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:03:47.309455 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 05:03:47.309492 | orchestrator | 2026-03-19 05:03:47.309504 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 05:03:47.309515 | orchestrator | Thursday 19 March 2026 05:02:56 +0000 (0:00:04.766) 0:26:50.046 ******** 
2026-03-19 05:03:47.309526 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:03:47.309537 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:03:47.309548 | orchestrator | 2026-03-19 05:03:47.309559 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:03:47.309570 | orchestrator | Thursday 19 March 2026 05:02:59 +0000 (0:00:02.359) 0:26:52.406 ******** 2026-03-19 05:03:47.309581 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-19 05:03:47.309592 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:03:47.309603 | orchestrator | 2026-03-19 05:03:47.309614 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-19 05:03:47.309625 | orchestrator | Thursday 19 March 2026 05:03:00 +0000 (0:00:00.974) 0:26:53.381 ******** 2026-03-19 05:03:47.309636 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-19 05:03:47.309647 | orchestrator | 2026-03-19 05:03:47.309658 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-19 05:03:47.309668 | orchestrator | Thursday 19 March 2026 05:03:01 +0000 (0:00:00.924) 0:26:54.306 ******** 2026-03-19 05:03:47.309679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309734 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:03:47.309745 | orchestrator | 2026-03-19 05:03:47.309756 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-19 05:03:47.309801 | orchestrator | Thursday 19 March 2026 05:03:01 +0000 (0:00:00.627) 0:26:54.933 ******** 2026-03-19 05:03:47.309829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:03:47.309924 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:03:47.309940 | orchestrator | 2026-03-19 05:03:47.309958 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-19 05:03:47.309976 | orchestrator | Thursday 19 March 2026 05:03:02 +0000 (0:00:00.634) 0:26:55.568 ******** 2026-03-19 05:03:47.309993 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:03:47.310011 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:03:47.310135 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:03:47.310158 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:03:47.310177 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:03:47.310194 | orchestrator | 2026-03-19 05:03:47.310213 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-19 05:03:47.310260 | orchestrator | Thursday 19 March 2026 05:03:36 +0000 (0:00:33.869) 0:27:29.438 ******** 2026-03-19 05:03:47.310279 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:03:47.310299 | orchestrator | 2026-03-19 05:03:47.310318 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-19 05:03:47.310337 | orchestrator | Thursday 19 March 2026 05:03:36 +0000 (0:00:00.148) 0:27:29.586 ******** 2026-03-19 05:03:47.310355 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:03:47.310373 | orchestrator | 2026-03-19 05:03:47.310393 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-19 05:03:47.310412 | orchestrator | Thursday 19 March 2026 05:03:36 +0000 (0:00:00.131) 0:27:29.718 ******** 2026-03-19 05:03:47.310431 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-19 05:03:47.310446 | orchestrator | 2026-03-19 05:03:47.310457 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-19 05:03:47.310468 | orchestrator | Thursday 19 March 2026 05:03:37 +0000 (0:00:00.599) 0:27:30.317 ******** 2026-03-19 05:03:47.310479 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-19 05:03:47.310489 | orchestrator | 2026-03-19 05:03:47.310500 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-19 05:03:47.310511 | orchestrator | Thursday 19 March 2026 05:03:37 +0000 (0:00:00.556) 0:27:30.873 ******** 2026-03-19 05:03:47.310522 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:03:47.310533 | orchestrator | 2026-03-19 05:03:47.310544 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-19 05:03:47.310555 | orchestrator | Thursday 19 March 2026 05:03:38 +0000 (0:00:01.089) 0:27:31.963 ******** 2026-03-19 05:03:47.310566 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:03:47.310577 | orchestrator | 2026-03-19 05:03:47.310588 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-19 05:03:47.310598 | orchestrator | Thursday 19 March 2026 05:03:39 +0000 (0:00:00.975) 0:27:32.939 ******** 2026-03-19 05:03:47.310609 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:03:47.310620 | orchestrator | 2026-03-19 05:03:47.310631 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-19 05:03:47.310642 | orchestrator | Thursday 19 March 2026 05:03:40 +0000 (0:00:01.285) 0:27:34.224 ******** 2026-03-19 05:03:47.310653 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-19 05:03:47.310664 | orchestrator | 2026-03-19 05:03:47.310674 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-19 05:03:47.310685 | 
orchestrator | 2026-03-19 05:03:47.310696 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 05:03:47.310706 | orchestrator | Thursday 19 March 2026 05:03:44 +0000 (0:00:03.191) 0:27:37.415 ******** 2026-03-19 05:03:47.310717 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-19 05:03:47.310728 | orchestrator | 2026-03-19 05:03:47.310738 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 05:03:47.310749 | orchestrator | Thursday 19 March 2026 05:03:44 +0000 (0:00:00.239) 0:27:37.655 ******** 2026-03-19 05:03:47.310810 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.310823 | orchestrator | 2026-03-19 05:03:47.310834 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 05:03:47.310845 | orchestrator | Thursday 19 March 2026 05:03:44 +0000 (0:00:00.547) 0:27:38.203 ******** 2026-03-19 05:03:47.310855 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.310866 | orchestrator | 2026-03-19 05:03:47.310877 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 05:03:47.310888 | orchestrator | Thursday 19 March 2026 05:03:45 +0000 (0:00:00.144) 0:27:38.347 ******** 2026-03-19 05:03:47.310899 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.310909 | orchestrator | 2026-03-19 05:03:47.310920 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:03:47.310931 | orchestrator | Thursday 19 March 2026 05:03:45 +0000 (0:00:00.484) 0:27:38.832 ******** 2026-03-19 05:03:47.310942 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.310953 | orchestrator | 2026-03-19 05:03:47.310963 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 05:03:47.310974 | orchestrator | Thursday 
19 March 2026 05:03:45 +0000 (0:00:00.135) 0:27:38.967 ******** 2026-03-19 05:03:47.310985 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.310996 | orchestrator | 2026-03-19 05:03:47.311006 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 05:03:47.311017 | orchestrator | Thursday 19 March 2026 05:03:45 +0000 (0:00:00.148) 0:27:39.116 ******** 2026-03-19 05:03:47.311028 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.311039 | orchestrator | 2026-03-19 05:03:47.311050 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 05:03:47.311060 | orchestrator | Thursday 19 March 2026 05:03:46 +0000 (0:00:00.157) 0:27:39.273 ******** 2026-03-19 05:03:47.311071 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:47.311082 | orchestrator | 2026-03-19 05:03:47.311093 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 05:03:47.311103 | orchestrator | Thursday 19 March 2026 05:03:46 +0000 (0:00:00.156) 0:27:39.430 ******** 2026-03-19 05:03:47.311114 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:47.311125 | orchestrator | 2026-03-19 05:03:47.311143 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 05:03:47.311154 | orchestrator | Thursday 19 March 2026 05:03:46 +0000 (0:00:00.402) 0:27:39.833 ******** 2026-03-19 05:03:47.311165 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:03:47.311176 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:03:47.311186 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:03:47.311197 | orchestrator | 2026-03-19 05:03:47.311208 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-19 05:03:47.311227 | orchestrator | Thursday 19 March 2026 05:03:47 +0000 (0:00:00.725) 0:27:40.558 ******** 2026-03-19 05:03:54.629905 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.629995 | orchestrator | 2026-03-19 05:03:54.630006 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 05:03:54.630054 | orchestrator | Thursday 19 March 2026 05:03:47 +0000 (0:00:00.263) 0:27:40.822 ******** 2026-03-19 05:03:54.630063 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:03:54.630070 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:03:54.630078 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:03:54.630084 | orchestrator | 2026-03-19 05:03:54.630091 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 05:03:54.630099 | orchestrator | Thursday 19 March 2026 05:03:49 +0000 (0:00:01.879) 0:27:42.701 ******** 2026-03-19 05:03:54.630131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 05:03:54.630139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 05:03:54.630146 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 05:03:54.630153 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630160 | orchestrator | 2026-03-19 05:03:54.630167 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 05:03:54.630174 | orchestrator | Thursday 19 March 2026 05:03:49 +0000 (0:00:00.445) 0:27:43.146 ******** 2026-03-19 05:03:54.630183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630206 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630213 | orchestrator | 2026-03-19 05:03:54.630220 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 05:03:54.630227 | orchestrator | Thursday 19 March 2026 05:03:50 +0000 (0:00:00.649) 0:27:43.796 ******** 2026-03-19 05:03:54.630235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630252 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:54.630259 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630266 | orchestrator | 2026-03-19 05:03:54.630273 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 05:03:54.630280 | orchestrator | Thursday 19 March 2026 05:03:50 +0000 (0:00:00.168) 0:27:43.965 ******** 2026-03-19 05:03:54.630313 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 05:03:48.082993', 'end': '2026-03-19 05:03:48.126357', 'delta': '0:00:00.043364', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 05:03:54.630329 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 05:03:48.644816', 'end': '2026-03-19 05:03:48.699876', 'delta': '0:00:00.055060', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 05:03:54.630337 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 05:03:49.224864', 'end': '2026-03-19 05:03:49.280485', 'delta': '0:00:00.055621', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 05:03:54.630344 | orchestrator | 2026-03-19 05:03:54.630351 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 05:03:54.630358 | orchestrator | Thursday 19 March 2026 05:03:50 +0000 (0:00:00.206) 0:27:44.171 ******** 2026-03-19 05:03:54.630364 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.630371 | orchestrator | 2026-03-19 05:03:54.630378 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 05:03:54.630385 | orchestrator | Thursday 19 March 2026 05:03:51 +0000 (0:00:00.277) 0:27:44.448 ******** 2026-03-19 05:03:54.630391 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630398 | orchestrator | 2026-03-19 05:03:54.630405 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-19 05:03:54.630411 | orchestrator | Thursday 19 March 2026 05:03:51 +0000 (0:00:00.262) 0:27:44.711 ******** 2026-03-19 05:03:54.630418 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.630425 | orchestrator | 2026-03-19 05:03:54.630431 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 05:03:54.630438 | orchestrator | Thursday 19 March 2026 05:03:51 +0000 (0:00:00.150) 0:27:44.862 ******** 2026-03-19 05:03:54.630444 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:03:54.630451 | orchestrator | 2026-03-19 05:03:54.630458 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:03:54.630466 | orchestrator | Thursday 19 March 2026 05:03:52 +0000 (0:00:01.056) 0:27:45.918 ******** 2026-03-19 05:03:54.630474 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.630482 | orchestrator | 2026-03-19 05:03:54.630490 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 05:03:54.630498 | orchestrator | Thursday 19 March 2026 05:03:52 +0000 (0:00:00.154) 0:27:46.073 ******** 2026-03-19 05:03:54.630506 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630513 | orchestrator | 2026-03-19 05:03:54.630521 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 05:03:54.630529 | orchestrator | Thursday 19 March 2026 05:03:52 +0000 (0:00:00.118) 0:27:46.191 ******** 2026-03-19 05:03:54.630537 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630544 | orchestrator | 2026-03-19 05:03:54.630551 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:03:54.630559 | orchestrator | Thursday 19 March 2026 05:03:53 +0000 (0:00:00.969) 0:27:47.161 ******** 2026-03-19 05:03:54.630571 | orchestrator | 
skipping: [testbed-node-4] 2026-03-19 05:03:54.630579 | orchestrator | 2026-03-19 05:03:54.630587 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 05:03:54.630594 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.131) 0:27:47.293 ******** 2026-03-19 05:03:54.630601 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630609 | orchestrator | 2026-03-19 05:03:54.630617 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 05:03:54.630629 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.131) 0:27:47.424 ******** 2026-03-19 05:03:54.630636 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.630644 | orchestrator | 2026-03-19 05:03:54.630652 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 05:03:54.630660 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.168) 0:27:47.593 ******** 2026-03-19 05:03:54.630667 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:54.630675 | orchestrator | 2026-03-19 05:03:54.630683 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 05:03:54.630691 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.121) 0:27:47.715 ******** 2026-03-19 05:03:54.630698 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:54.630706 | orchestrator | 2026-03-19 05:03:54.630714 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 05:03:54.630726 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.170) 0:27:47.885 ******** 2026-03-19 05:03:55.196071 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:55.196174 | orchestrator | 2026-03-19 05:03:55.196189 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 05:03:55.196201 
| orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.130) 0:27:48.016 ******** 2026-03-19 05:03:55.196213 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:55.196224 | orchestrator | 2026-03-19 05:03:55.196234 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 05:03:55.196245 | orchestrator | Thursday 19 March 2026 05:03:54 +0000 (0:00:00.173) 0:27:48.190 ******** 2026-03-19 05:03:55.196256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}})  2026-03-19 05:03:55.196285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 05:03:55.196316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}})  2026-03-19 05:03:55.196323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 05:03:55.196373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}})  2026-03-19 05:03:55.196404 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}})  2026-03-19 05:03:55.196415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.196432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 05:03:55.550295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.550441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:03:55.550464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:03:55.550480 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:55.550494 | orchestrator | 2026-03-19 05:03:55.550506 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 05:03:55.550518 | orchestrator | Thursday 19 March 2026 05:03:55 +0000 (0:00:00.407) 0:27:48.598 ******** 2026-03-19 05:03:55.550545 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8', 'dm-uuid-LVM-PFY0Rl2lLSDPTqo6L81ajYR9zXNMcgCK2vuZrfDmVDjnhqdE6KPrssslEvjkZoWJ'], 'uuids': ['31574937-1eae-4c97-8290-5d57d110b5bc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8', 'scsi-SQEMU_QEMU_HARDDISK_159498f1-f6fb-4526-96c5-103a28738ba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '159498f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-b67q4i-jc1s-Ww1i-iA1A-GHhQ-WjS2-QyRdKZ', 'scsi-0QEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5', 'scsi-SQEMU_QEMU_HARDDISK_77d1d0bc-0a63-49dd-b34a-7227460faeb5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550627 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-17-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:55.550689 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL', 'dm-uuid-CRYPT-LUKS2-bf8d235a73e24a72a5796ffd881cfbb0-vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b653c337--740c--52f4--bc46--3e8e37039a81-osd--block--b653c337--740c--52f4--bc46--3e8e37039a81', 'dm-uuid-LVM-bgy0lZJMh7sbafoPOYMBv3S4nbDmenixvCt1pgFjFOtxyroLff2vXLsYbvThWbQL'], 'uuids': ['bf8d235a-73e2-4a72-a579-6ffd881cfbb0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '77d1d0bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['vCt1pg-FjFO-txyr-oLff-2vXL-sYbv-ThWbQL']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894354 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-sx9LFt-qFem-yEhI-rpDt-nieW-LmkL-JllYOA', 'scsi-0QEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e', 'scsi-SQEMU_QEMU_HARDDISK_740ce1a0-d0ce-4991-9b3f-fd403e7e525e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '740ce1a0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8-osd--block--a2eacdaa--bff5--5a13--b9a9--6af0c62255c8']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b3a0fcd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3a0fcd-108c-44bd-8b62-9d8276f3656e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ', 'dm-uuid-CRYPT-LUKS2-315749371eae4c9782905d57d110b5bc-2vuZrf-DmVD-jnhq-dE6K-Prss-slEv-jkZoWJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:03:56.894526 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:03:56.894557 | orchestrator | 2026-03-19 05:03:56.894578 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 05:03:56.894598 | orchestrator | Thursday 19 March 2026 05:03:55 +0000 (0:00:00.389) 0:27:48.987 ******** 2026-03-19 05:03:56.894618 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:56.894636 | orchestrator | 2026-03-19 05:03:56.894656 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 05:03:56.894675 | orchestrator | Thursday 19 March 2026 05:03:56 +0000 (0:00:00.510) 0:27:49.498 ******** 2026-03-19 05:03:56.894694 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:56.894713 | orchestrator | 2026-03-19 05:03:56.894727 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:03:56.894740 | orchestrator | Thursday 19 March 2026 05:03:56 +0000 (0:00:00.150) 0:27:49.649 ******** 2026-03-19 05:03:56.894753 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:03:56.894834 | orchestrator | 2026-03-19 05:03:56.894850 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:03:56.894875 | orchestrator | Thursday 19 March 2026 05:03:56 +0000 (0:00:00.501) 0:27:50.150 ******** 2026-03-19 05:04:13.977433 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.977544 | orchestrator | 2026-03-19 05:04:13.977559 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:04:13.977571 | orchestrator | Thursday 19 March 2026 05:03:57 +0000 (0:00:00.423) 0:27:50.573 ******** 2026-03-19 05:04:13.977581 | orchestrator | skipping: [testbed-node-4] 2026-03-19 
05:04:13.977605 | orchestrator | 2026-03-19 05:04:13.977616 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:04:13.977626 | orchestrator | Thursday 19 March 2026 05:03:57 +0000 (0:00:00.246) 0:27:50.819 ******** 2026-03-19 05:04:13.977636 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.977646 | orchestrator | 2026-03-19 05:04:13.977656 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 05:04:13.977665 | orchestrator | Thursday 19 March 2026 05:03:57 +0000 (0:00:00.158) 0:27:50.978 ******** 2026-03-19 05:04:13.977676 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-19 05:04:13.977686 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-19 05:04:13.977696 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-19 05:04:13.977706 | orchestrator | 2026-03-19 05:04:13.977716 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 05:04:13.977726 | orchestrator | Thursday 19 March 2026 05:03:58 +0000 (0:00:00.686) 0:27:51.665 ******** 2026-03-19 05:04:13.977736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-19 05:04:13.977746 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-19 05:04:13.977756 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-19 05:04:13.977766 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.977846 | orchestrator | 2026-03-19 05:04:13.977859 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 05:04:13.977869 | orchestrator | Thursday 19 March 2026 05:03:58 +0000 (0:00:00.185) 0:27:51.850 ******** 2026-03-19 05:04:13.977879 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-19 05:04:13.977889 | 
orchestrator | 2026-03-19 05:04:13.977900 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:04:13.977926 | orchestrator | Thursday 19 March 2026 05:03:58 +0000 (0:00:00.209) 0:27:52.060 ******** 2026-03-19 05:04:13.977937 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.977947 | orchestrator | 2026-03-19 05:04:13.977956 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:04:13.977966 | orchestrator | Thursday 19 March 2026 05:03:58 +0000 (0:00:00.146) 0:27:52.206 ******** 2026-03-19 05:04:13.977977 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.977990 | orchestrator | 2026-03-19 05:04:13.978083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:04:13.978097 | orchestrator | Thursday 19 March 2026 05:03:59 +0000 (0:00:00.147) 0:27:52.353 ******** 2026-03-19 05:04:13.978108 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.978150 | orchestrator | 2026-03-19 05:04:13.978162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:04:13.978173 | orchestrator | Thursday 19 March 2026 05:03:59 +0000 (0:00:00.153) 0:27:52.507 ******** 2026-03-19 05:04:13.978185 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:04:13.978197 | orchestrator | 2026-03-19 05:04:13.978209 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:04:13.978224 | orchestrator | Thursday 19 March 2026 05:03:59 +0000 (0:00:00.232) 0:27:52.740 ******** 2026-03-19 05:04:13.978240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:04:13.978265 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:04:13.978284 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-03-19 05:04:13.978301 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.978316 | orchestrator | 2026-03-19 05:04:13.978333 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:04:13.978347 | orchestrator | Thursday 19 March 2026 05:04:00 +0000 (0:00:00.711) 0:27:53.451 ******** 2026-03-19 05:04:13.978363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:04:13.978378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:04:13.978393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:04:13.978409 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.978425 | orchestrator | 2026-03-19 05:04:13.978442 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:04:13.978458 | orchestrator | Thursday 19 March 2026 05:04:01 +0000 (0:00:00.842) 0:27:54.294 ******** 2026-03-19 05:04:13.978474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-19 05:04:13.978491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-19 05:04:13.978507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-19 05:04:13.978524 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:04:13.978541 | orchestrator | 2026-03-19 05:04:13.978558 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 05:04:13.978575 | orchestrator | Thursday 19 March 2026 05:04:02 +0000 (0:00:01.094) 0:27:55.388 ******** 2026-03-19 05:04:13.978592 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:04:13.978609 | orchestrator | 2026-03-19 05:04:13.978624 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:04:13.978634 | orchestrator | Thursday 19 March 2026 05:04:02 +0000 
(0:00:00.167) 0:27:55.555 ******** 2026-03-19 05:04:13.978644 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-19 05:04:13.978654 | orchestrator | 2026-03-19 05:04:13.978663 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-19 05:04:13.978676 | orchestrator | Thursday 19 March 2026 05:04:02 +0000 (0:00:00.342) 0:27:55.898 ******** 2026-03-19 05:04:13.978717 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:04:13.978734 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:04:13.978749 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:04:13.978763 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-19 05:04:13.978804 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 05:04:13.978821 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 05:04:13.978838 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 05:04:13.978856 | orchestrator | 2026-03-19 05:04:13.978889 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-19 05:04:13.978906 | orchestrator | Thursday 19 March 2026 05:04:03 +0000 (0:00:00.815) 0:27:56.714 ******** 2026-03-19 05:04:13.978922 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:04:13.978939 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:04:13.978956 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:04:13.978973 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-19 05:04:13.978989 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-19 05:04:13.979006 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-19 05:04:13.979020 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-19 05:04:13.979029 | orchestrator | 2026-03-19 05:04:13.979039 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-19 05:04:13.979049 | orchestrator | Thursday 19 March 2026 05:04:05 +0000 (0:00:01.649) 0:27:58.364 ******** 2026-03-19 05:04:13.979059 | orchestrator | changed: [testbed-node-4] 2026-03-19 05:04:13.979072 | orchestrator | 2026-03-19 05:04:13.979089 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-19 05:04:13.979115 | orchestrator | Thursday 19 March 2026 05:04:07 +0000 (0:00:02.332) 0:28:00.696 ******** 2026-03-19 05:04:13.979130 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 05:04:13.979145 | orchestrator | 2026-03-19 05:04:13.979158 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] ***************************************************
2026-03-19 05:04:13.979173 | orchestrator | Thursday 19 March 2026 05:04:09 +0000 (0:00:02.091) 0:28:02.788 ********
2026-03-19 05:04:13.979194 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 05:04:13.979216 | orchestrator |
2026-03-19 05:04:13.979241 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 05:04:13.979267 | orchestrator | Thursday 19 March 2026 05:04:10 +0000 (0:00:01.372) 0:28:04.161 ********
2026-03-19 05:04:13.979292 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-19 05:04:13.979319 | orchestrator |
2026-03-19 05:04:13.979350 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 05:04:13.979381 | orchestrator | Thursday 19 March 2026 05:04:11 +0000 (0:00:00.196) 0:28:04.352 ********
2026-03-19 05:04:13.979411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-19 05:04:13.979442 | orchestrator |
2026-03-19 05:04:13.979471 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 05:04:13.979505 | orchestrator | Thursday 19 March 2026 05:04:11 +0000 (0:00:00.434) 0:28:04.548 ********
2026-03-19 05:04:13.979527 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:13.979550 | orchestrator |
2026-03-19 05:04:13.979572 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 05:04:13.979590 | orchestrator | Thursday 19 March 2026 05:04:11 +0000 (0:00:00.574) 0:28:04.983 ********
2026-03-19 05:04:13.979612 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:13.979636 | orchestrator |
2026-03-19 05:04:13.979657 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 05:04:13.979680 | orchestrator | Thursday 19 March 2026 05:04:12 +0000 (0:00:00.574) 0:28:05.558 ********
2026-03-19 05:04:13.979703 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:13.979725 | orchestrator |
2026-03-19 05:04:13.979748 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 05:04:13.979772 | orchestrator | Thursday 19 March 2026 05:04:12 +0000 (0:00:00.694) 0:28:06.253 ********
2026-03-19 05:04:13.979844 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:13.979867 | orchestrator |
2026-03-19 05:04:13.979890 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 05:04:13.979913 | orchestrator | Thursday 19 March 2026 05:04:13 +0000 (0:00:00.531) 0:28:06.785 ********
2026-03-19 05:04:13.979934 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:13.979957 | orchestrator |
2026-03-19 05:04:13.979980 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 05:04:13.980003 | orchestrator | Thursday 19 March 2026 05:04:13 +0000 (0:00:00.150) 0:28:06.935 ********
2026-03-19 05:04:13.980025 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:13.980041 | orchestrator |
2026-03-19 05:04:13.980056 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 05:04:13.980070 | orchestrator | Thursday 19 March 2026 05:04:13 +0000 (0:00:00.137) 0:28:07.073 ********
2026-03-19 05:04:13.980085 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:13.980101 | orchestrator |
2026-03-19 05:04:13.980115 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 05:04:13.980150 | orchestrator | Thursday 19 March 2026 05:04:13 +0000 (0:00:00.155) 0:28:07.229 ********
2026-03-19 05:04:25.021723 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.021872 | orchestrator |
2026-03-19 05:04:25.021888 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 05:04:25.021894 | orchestrator | Thursday 19 March 2026 05:04:14 +0000 (0:00:00.539) 0:28:07.769 ********
2026-03-19 05:04:25.021898 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.021902 | orchestrator |
2026-03-19 05:04:25.021906 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 05:04:25.021910 | orchestrator | Thursday 19 March 2026 05:04:15 +0000 (0:00:00.558) 0:28:08.327 ********
2026-03-19 05:04:25.021914 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.021919 | orchestrator |
2026-03-19 05:04:25.021923 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 05:04:25.021928 | orchestrator | Thursday 19 March 2026 05:04:15 +0000 (0:00:00.125) 0:28:08.453 ********
2026-03-19 05:04:25.021931 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.021935 | orchestrator |
2026-03-19 05:04:25.021939 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 05:04:25.021943 | orchestrator | Thursday 19 March 2026 05:04:15 +0000 (0:00:00.121) 0:28:08.575 ********
2026-03-19 05:04:25.021947 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.021950 | orchestrator |
2026-03-19 05:04:25.021954 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 05:04:25.021958 | orchestrator | Thursday 19 March 2026 05:04:15 +0000 (0:00:00.161) 0:28:08.736 ********
2026-03-19 05:04:25.021962 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.021966 | orchestrator |
2026-03-19 05:04:25.021969 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 05:04:25.021973 | orchestrator | Thursday 19 March 2026 05:04:15 +0000 (0:00:00.155) 0:28:08.891 ********
2026-03-19 05:04:25.021977 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.021981 | orchestrator |
2026-03-19 05:04:25.021991 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 05:04:25.021995 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.455) 0:28:09.347 ********
2026-03-19 05:04:25.021999 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022003 | orchestrator |
2026-03-19 05:04:25.022006 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 05:04:25.022050 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.147) 0:28:09.495 ********
2026-03-19 05:04:25.022054 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022058 | orchestrator |
2026-03-19 05:04:25.022062 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 05:04:25.022066 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.136) 0:28:09.632 ********
2026-03-19 05:04:25.022084 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022088 | orchestrator |
2026-03-19 05:04:25.022092 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 05:04:25.022096 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.139) 0:28:09.771 ********
2026-03-19 05:04:25.022100 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022104 | orchestrator |
2026-03-19 05:04:25.022108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 05:04:25.022111 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.174) 0:28:09.946 ********
2026-03-19 05:04:25.022115 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022119 | orchestrator |
2026-03-19 05:04:25.022123 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 05:04:25.022126 | orchestrator | Thursday 19 March 2026 05:04:16 +0000 (0:00:00.237) 0:28:10.183 ********
2026-03-19 05:04:25.022130 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022134 | orchestrator |
2026-03-19 05:04:25.022138 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 05:04:25.022141 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.176) 0:28:10.360 ********
2026-03-19 05:04:25.022145 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022149 | orchestrator |
2026-03-19 05:04:25.022153 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 05:04:25.022157 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.109) 0:28:10.470 ********
2026-03-19 05:04:25.022161 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022164 | orchestrator |
2026-03-19 05:04:25.022168 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 05:04:25.022172 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.132) 0:28:10.602 ********
2026-03-19 05:04:25.022176 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022179 | orchestrator |
2026-03-19 05:04:25.022183 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 05:04:25.022187 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.126) 0:28:10.729 ********
2026-03-19 05:04:25.022191 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022194 | orchestrator |
2026-03-19 05:04:25.022198 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 05:04:25.022202 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.117) 0:28:10.846 ********
2026-03-19 05:04:25.022206 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022209 | orchestrator |
2026-03-19 05:04:25.022213 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 05:04:25.022217 | orchestrator | Thursday 19 March 2026 05:04:17 +0000 (0:00:00.132) 0:28:10.979 ********
2026-03-19 05:04:25.022221 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022225 | orchestrator |
2026-03-19 05:04:25.022229 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 05:04:25.022234 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.454) 0:28:11.433 ********
2026-03-19 05:04:25.022237 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022241 | orchestrator |
2026-03-19 05:04:25.022245 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 05:04:25.022249 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.120) 0:28:11.554 ********
2026-03-19 05:04:25.022252 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022256 | orchestrator |
2026-03-19 05:04:25.022272 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 05:04:25.022277 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.127) 0:28:11.681 ********
2026-03-19 05:04:25.022284 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022290 | orchestrator |
2026-03-19 05:04:25.022296 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 05:04:25.022302 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.124) 0:28:11.805 ********
2026-03-19 05:04:25.022316 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022322 | orchestrator |
2026-03-19 05:04:25.022328 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 05:04:25.022334 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.124) 0:28:11.930 ********
2026-03-19 05:04:25.022340 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022346 | orchestrator |
2026-03-19 05:04:25.022351 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 05:04:25.022357 | orchestrator | Thursday 19 March 2026 05:04:18 +0000 (0:00:00.194) 0:28:12.124 ********
2026-03-19 05:04:25.022364 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022370 | orchestrator |
2026-03-19 05:04:25.022377 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 05:04:25.022383 | orchestrator | Thursday 19 March 2026 05:04:19 +0000 (0:00:00.937) 0:28:13.061 ********
2026-03-19 05:04:25.022389 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022395 | orchestrator |
2026-03-19 05:04:25.022401 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 05:04:25.022408 | orchestrator | Thursday 19 March 2026 05:04:21 +0000 (0:00:01.369) 0:28:14.430 ********
2026-03-19 05:04:25.022415 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-03-19 05:04:25.022423 | orchestrator |
2026-03-19 05:04:25.022431 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 05:04:25.022436 | orchestrator | Thursday 19 March 2026 05:04:21 +0000 (0:00:00.212) 0:28:14.642 ********
2026-03-19 05:04:25.022440 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022445 | orchestrator |
2026-03-19 05:04:25.022449 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 05:04:25.022454 | orchestrator | Thursday 19 March 2026 05:04:21 +0000 (0:00:00.142) 0:28:14.785 ********
2026-03-19 05:04:25.022459 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022463 | orchestrator |
2026-03-19 05:04:25.022468 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 05:04:25.022473 | orchestrator | Thursday 19 March 2026 05:04:21 +0000 (0:00:00.142) 0:28:14.927 ********
2026-03-19 05:04:25.022478 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 05:04:25.022485 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 05:04:25.022492 | orchestrator |
2026-03-19 05:04:25.022498 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 05:04:25.022503 | orchestrator | Thursday 19 March 2026 05:04:22 +0000 (0:00:00.797) 0:28:15.725 ********
2026-03-19 05:04:25.022509 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022515 | orchestrator |
2026-03-19 05:04:25.022521 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 05:04:25.022527 | orchestrator | Thursday 19 March 2026 05:04:23 +0000 (0:00:00.722) 0:28:16.447 ********
2026-03-19 05:04:25.022534 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022540 | orchestrator |
2026-03-19 05:04:25.022547 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 05:04:25.022554 | orchestrator | Thursday 19 March 2026 05:04:23 +0000 (0:00:00.163) 0:28:16.610 ********
2026-03-19 05:04:25.022561 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022567 | orchestrator |
2026-03-19 05:04:25.022574 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 05:04:25.022582 | orchestrator | Thursday 19 March 2026 05:04:23 +0000 (0:00:00.151) 0:28:16.762 ********
2026-03-19 05:04:25.022585 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022589 | orchestrator |
2026-03-19 05:04:25.022593 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 05:04:25.022599 | orchestrator | Thursday 19 March 2026 05:04:23 +0000 (0:00:00.127) 0:28:16.890 ********
2026-03-19 05:04:25.022605 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-19 05:04:25.022618 | orchestrator |
2026-03-19 05:04:25.022624 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 05:04:25.022631 | orchestrator | Thursday 19 March 2026 05:04:23 +0000 (0:00:00.203) 0:28:17.094 ********
2026-03-19 05:04:25.022637 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:25.022644 | orchestrator |
2026-03-19 05:04:25.022650 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 05:04:25.022657 | orchestrator | Thursday 19 March 2026 05:04:24 +0000 (0:00:00.735) 0:28:17.829 ********
2026-03-19 05:04:25.022663 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 05:04:25.022670 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 05:04:25.022676 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 05:04:25.022682 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022685 | orchestrator |
2026-03-19 05:04:25.022690 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 05:04:25.022696 | orchestrator | Thursday 19 March 2026 05:04:24 +0000 (0:00:00.148) 0:28:17.977 ********
2026-03-19 05:04:25.022703 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022709 | orchestrator |
2026-03-19 05:04:25.022715 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 05:04:25.022721 | orchestrator | Thursday 19 March 2026 05:04:24 +0000 (0:00:00.133) 0:28:18.111 ********
2026-03-19 05:04:25.022728 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:25.022734 | orchestrator |
2026-03-19 05:04:25.022747 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 05:04:43.191849 | orchestrator | Thursday 19 March 2026 05:04:25 +0000 (0:00:00.163) 0:28:18.274 ********
2026-03-19 05:04:43.191970 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.191987 | orchestrator |
2026-03-19 05:04:43.191999 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 05:04:43.192011 | orchestrator | Thursday 19 March 2026 05:04:25 +0000 (0:00:00.146) 0:28:18.420 ********
2026-03-19 05:04:43.192023 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192033 | orchestrator |
2026-03-19 05:04:43.192044 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 05:04:43.192054 | orchestrator | Thursday 19 March 2026 05:04:25 +0000 (0:00:00.149) 0:28:18.570 ********
2026-03-19 05:04:43.192064 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192075 | orchestrator |
2026-03-19 05:04:43.192086 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 05:04:43.192096 | orchestrator | Thursday 19 March 2026 05:04:25 +0000 (0:00:00.179) 0:28:18.750 ********
2026-03-19 05:04:43.192106 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:43.192118 | orchestrator |
2026-03-19 05:04:43.192128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 05:04:43.192139 | orchestrator | Thursday 19 March 2026 05:04:27 +0000 (0:00:01.806) 0:28:20.557 ********
2026-03-19 05:04:43.192149 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:43.192159 | orchestrator |
2026-03-19 05:04:43.192170 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 05:04:43.192180 | orchestrator | Thursday 19 March 2026 05:04:27 +0000 (0:00:00.144) 0:28:20.701 ********
2026-03-19 05:04:43.192296 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-19 05:04:43.192320 | orchestrator |
2026-03-19 05:04:43.192331 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 05:04:43.192342 | orchestrator | Thursday 19 March 2026 05:04:27 +0000 (0:00:00.219) 0:28:20.921 ********
2026-03-19 05:04:43.192354 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192365 | orchestrator |
2026-03-19 05:04:43.192376 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 05:04:43.192414 | orchestrator | Thursday 19 March 2026 05:04:27 +0000 (0:00:00.151) 0:28:21.073 ********
2026-03-19 05:04:43.192426 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192436 | orchestrator |
2026-03-19 05:04:43.192447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 05:04:43.192458 | orchestrator | Thursday 19 March 2026 05:04:27 +0000 (0:00:00.174) 0:28:21.247 ********
2026-03-19 05:04:43.192469 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192479 | orchestrator |
2026-03-19 05:04:43.192489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 05:04:43.192500 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.162) 0:28:21.409 ********
2026-03-19 05:04:43.192510 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192519 | orchestrator |
2026-03-19 05:04:43.192529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 05:04:43.192540 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.165) 0:28:21.575 ********
2026-03-19 05:04:43.192550 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192560 | orchestrator |
2026-03-19 05:04:43.192569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 05:04:43.192580 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.167) 0:28:21.742 ********
2026-03-19 05:04:43.192590 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192601 | orchestrator |
2026-03-19 05:04:43.192611 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 05:04:43.192622 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.169) 0:28:21.912 ********
2026-03-19 05:04:43.192633 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192643 | orchestrator |
2026-03-19 05:04:43.192653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 05:04:43.192663 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.150) 0:28:22.062 ********
2026-03-19 05:04:43.192673 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.192684 | orchestrator |
2026-03-19 05:04:43.192696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 05:04:43.192707 | orchestrator | Thursday 19 March 2026 05:04:28 +0000 (0:00:00.159) 0:28:22.221 ********
2026-03-19 05:04:43.192717 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:04:43.192727 | orchestrator |
2026-03-19 05:04:43.192739 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 05:04:43.192749 | orchestrator | Thursday 19 March 2026 05:04:29 +0000 (0:00:00.638) 0:28:22.860 ********
2026-03-19 05:04:43.192759 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-19 05:04:43.192770 | orchestrator |
2026-03-19 05:04:43.192781 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 05:04:43.192916 | orchestrator | Thursday 19 March 2026 05:04:29 +0000 (0:00:00.207) 0:28:23.068 ********
2026-03-19 05:04:43.192934 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-19 05:04:43.192941 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-19 05:04:43.192947 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-19 05:04:43.192953 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-19 05:04:43.192959 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-19 05:04:43.192966 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-19 05:04:43.192972 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-19 05:04:43.192978 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-19 05:04:43.192984 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 05:04:43.192990 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 05:04:43.192997 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 05:04:43.193022 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 05:04:43.193040 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 05:04:43.193046 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 05:04:43.193053 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-19 05:04:43.193059 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-19 05:04:43.193066 | orchestrator |
2026-03-19 05:04:43.193072 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 05:04:43.193078 | orchestrator | Thursday 19 March 2026 05:04:35 +0000 (0:00:05.892) 0:28:28.960 ********
2026-03-19 05:04:43.193085 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-19 05:04:43.193091 | orchestrator |
2026-03-19 05:04:43.193097 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 05:04:43.193103 | orchestrator | Thursday 19 March 2026 05:04:35 +0000 (0:00:00.240) 0:28:29.201 ********
2026-03-19 05:04:43.193110 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 05:04:43.193117 | orchestrator |
2026-03-19 05:04:43.193123 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 05:04:43.193129 | orchestrator | Thursday 19 March 2026 05:04:36 +0000 (0:00:00.511) 0:28:29.712 ********
2026-03-19 05:04:43.193136 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 05:04:43.193142 | orchestrator |
2026-03-19 05:04:43.193148 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 05:04:43.193154 | orchestrator | Thursday 19 March 2026 05:04:37 +0000 (0:00:00.982) 0:28:30.695 ********
2026-03-19 05:04:43.193160 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193167 | orchestrator |
2026-03-19 05:04:43.193173 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 05:04:43.193185 | orchestrator | Thursday 19 March 2026 05:04:37 +0000 (0:00:00.158) 0:28:30.853 ********
2026-03-19 05:04:43.193191 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193197 | orchestrator |
2026-03-19 05:04:43.193204 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 05:04:43.193210 | orchestrator | Thursday 19 March 2026 05:04:37 +0000 (0:00:00.139) 0:28:30.992 ********
2026-03-19 05:04:43.193216 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193222 | orchestrator |
2026-03-19 05:04:43.193228 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 05:04:43.193234 | orchestrator | Thursday 19 March 2026 05:04:37 +0000 (0:00:00.162) 0:28:31.154 ********
2026-03-19 05:04:43.193240 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193247 | orchestrator |
2026-03-19 05:04:43.193253 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 05:04:43.193259 | orchestrator | Thursday 19 March 2026 05:04:38 +0000 (0:00:00.148) 0:28:31.303 ********
2026-03-19 05:04:43.193265 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193271 | orchestrator |
2026-03-19 05:04:43.193277 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 05:04:43.193284 | orchestrator | Thursday 19 March 2026 05:04:38 +0000 (0:00:00.137) 0:28:31.441 ********
2026-03-19 05:04:43.193290 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193296 | orchestrator |
2026-03-19 05:04:43.193302 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 05:04:43.193308 | orchestrator | Thursday 19 March 2026 05:04:38 +0000 (0:00:00.454) 0:28:31.895 ********
2026-03-19 05:04:43.193314 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193321 | orchestrator |
2026-03-19 05:04:43.193327 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 05:04:43.193333 | orchestrator | Thursday 19 March 2026 05:04:38 +0000 (0:00:00.147) 0:28:32.043 ********
2026-03-19 05:04:43.193344 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193350 | orchestrator |
2026-03-19 05:04:43.193356 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 05:04:43.193363 | orchestrator | Thursday 19 March 2026 05:04:38 +0000 (0:00:00.133) 0:28:32.176 ********
2026-03-19 05:04:43.193369 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193375 | orchestrator |
2026-03-19 05:04:43.193381 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 05:04:43.193387 | orchestrator | Thursday 19 March 2026 05:04:39 +0000 (0:00:00.155) 0:28:32.332 ********
2026-03-19 05:04:43.193394 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193400 | orchestrator |
2026-03-19 05:04:43.193406 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 05:04:43.193413 | orchestrator | Thursday 19 March 2026 05:04:39 +0000 (0:00:00.162) 0:28:32.495 ********
2026-03-19 05:04:43.193419 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:04:43.193425 | orchestrator |
2026-03-19 05:04:43.193431 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 05:04:43.193437 | orchestrator | Thursday 19 March 2026 05:04:39 +0000 (0:00:00.161) 0:28:32.656 ********
2026-03-19 05:04:43.193443 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-19 05:04:43.193450 | orchestrator |
2026-03-19 05:04:43.193456 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 05:04:43.193462 | orchestrator | Thursday 19 March 2026 05:04:42 +0000 (0:00:03.594) 0:28:36.251 ********
2026-03-19 05:04:43.193468 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 05:04:43.193474 | orchestrator |
2026-03-19 05:04:43.193485 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 05:05:06.762764 | orchestrator | Thursday 19 March 2026 05:04:43 +0000 (0:00:00.193) 0:28:36.444 ********
2026-03-19 05:05:06.762939 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-19 05:05:06.762955 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-19 05:05:06.762965 | orchestrator |
2026-03-19 05:05:06.762973 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 05:05:06.762981 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:03.945) 0:28:40.390 ********
2026-03-19 05:05:06.762988 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.762996 | orchestrator |
2026-03-19 05:05:06.763004 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 05:05:06.763011 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:00.149) 0:28:40.539 ********
2026-03-19 05:05:06.763018 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763025 | orchestrator |
2026-03-19 05:05:06.763033 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 05:05:06.763041 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:00.138) 0:28:40.678 ********
2026-03-19 05:05:06.763048 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763056 | orchestrator |
2026-03-19 05:05:06.763075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 05:05:06.763083 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:00.158) 0:28:40.836 ********
2026-03-19 05:05:06.763110 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763118 | orchestrator |
2026-03-19 05:05:06.763126 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 05:05:06.763133 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:00.155) 0:28:40.991 ********
2026-03-19 05:05:06.763140 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763147 | orchestrator |
2026-03-19 05:05:06.763155 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 05:05:06.763162 | orchestrator | Thursday 19 March 2026 05:04:47 +0000 (0:00:00.155) 0:28:41.148 ********
2026-03-19 05:05:06.763169 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:05:06.763177 | orchestrator |
2026-03-19 05:05:06.763185 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 05:05:06.763192 | orchestrator | Thursday 19 March 2026 05:04:49 +0000 (0:00:01.159) 0:28:42.307 ********
2026-03-19 05:05:06.763199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-19 05:05:06.763207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-19 05:05:06.763214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-19 05:05:06.763221 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763228 | orchestrator |
2026-03-19 05:05:06.763235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 05:05:06.763242 | orchestrator | Thursday 19 March 2026 05:04:49 +0000 (0:00:00.461) 0:28:42.768 ********
2026-03-19 05:05:06.763250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-19 05:05:06.763257 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-19 05:05:06.763264 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-19 05:05:06.763271 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763278 | orchestrator |
2026-03-19 05:05:06.763285 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 05:05:06.763293 | orchestrator | Thursday 19 March 2026 05:04:49 +0000 (0:00:00.455) 0:28:43.224 ********
2026-03-19 05:05:06.763300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-19 05:05:06.763307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-19 05:05:06.763314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-19 05:05:06.763321 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763328 | orchestrator |
2026-03-19 05:05:06.763336 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 05:05:06.763343 | orchestrator | Thursday 19 March 2026 05:04:50 +0000 (0:00:00.493) 0:28:43.718 ********
2026-03-19 05:05:06.763350 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:05:06.763357 | orchestrator |
2026-03-19 05:05:06.763364 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 05:05:06.763372 | orchestrator | Thursday 19 March 2026 05:04:50 +0000 (0:00:00.187) 0:28:43.905 ********
2026-03-19 05:05:06.763379 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-19 05:05:06.763386 | orchestrator |
2026-03-19 05:05:06.763393 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-19 05:05:06.763400 | orchestrator | Thursday 19 March 2026 05:04:51 +0000 (0:00:00.453) 0:28:44.358 ********
2026-03-19 05:05:06.763408 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:05:06.763415 | orchestrator |
2026-03-19 05:05:06.763422 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-19 05:05:06.763429 | orchestrator | Thursday 19 March 2026 05:04:51 +0000 (0:00:00.858) 0:28:45.217 ********
2026-03-19 05:05:06.763436 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-03-19 05:05:06.763443 | orchestrator |
2026-03-19 05:05:06.763464 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-19 05:05:06.763472 | orchestrator | Thursday 19 March 2026 05:04:52 +0000 (0:00:00.229) 0:28:45.447 ********
2026-03-19 05:05:06.763479 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 05:05:06.763492 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-19 05:05:06.763499 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-19 05:05:06.763507 | orchestrator |
2026-03-19 05:05:06.763514 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-19 05:05:06.763521 | orchestrator | Thursday 19 March 2026 05:04:54 +0000 (0:00:02.333) 0:28:47.781 ********
2026-03-19 05:05:06.763529 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-03-19 05:05:06.763536 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-19 05:05:06.763543 | orchestrator | ok: [testbed-node-4]
2026-03-19 05:05:06.763550 | orchestrator |
2026-03-19 05:05:06.763557 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-19 05:05:06.763565 | orchestrator | Thursday 19 March 2026 05:04:55 +0000 (0:00:00.983) 0:28:48.765 ********
2026-03-19 05:05:06.763572 | orchestrator | skipping: [testbed-node-4]
2026-03-19 05:05:06.763579 | orchestrator |
2026-03-19 05:05:06.763586 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-19 05:05:06.763593 | orchestrator | Thursday 19 March 2026 05:04:55 +0000 (0:00:00.490) 0:28:49.255 ********
2026-03-19 05:05:06.763601 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-03-19 05:05:06.763609 | orchestrator |
2026-03-19 05:05:06.763616 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-19 05:05:06.763623 | orchestrator | Thursday 19 March 2026 05:04:56 +0000 (0:00:00.211) 0:28:49.466 ********
2026-03-19 05:05:06.763630 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-19 05:05:06.763639 | orchestrator |
2026-03-19 05:05:06.763646 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-19 05:05:06.763657 | orchestrator | Thursday 19 March 2026 05:04:56 +0000 (0:00:00.711) 0:28:50.177 ********
2026-03-19 05:05:06.763665 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-19 05:05:06.763672 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-19 05:05:06.763679 | orchestrator |
2026-03-19 05:05:06.763687 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-19 05:05:06.763694 | orchestrator | Thursday 19 March 2026 05:05:01 +0000 (0:00:04.325) 0:28:54.502 ********
2026-03-19 05:05:06.763701 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:05:06.763708 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:05:06.763715 | orchestrator | 2026-03-19 05:05:06.763723 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:05:06.763730 | orchestrator | Thursday 19 March 2026 05:05:03 +0000 (0:00:02.227) 0:28:56.730 ******** 2026-03-19 05:05:06.763737 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-19 05:05:06.763744 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:05:06.763751 | orchestrator | 2026-03-19 05:05:06.763759 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-19 05:05:06.763766 | orchestrator | Thursday 19 March 2026 05:05:04 +0000 (0:00:01.048) 0:28:57.779 ******** 2026-03-19 05:05:06.763773 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-19 05:05:06.763780 | orchestrator | 2026-03-19 05:05:06.763787 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-19 05:05:06.763795 | orchestrator | Thursday 19 March 2026 05:05:04 +0000 (0:00:00.251) 0:28:58.030 ******** 2026-03-19 05:05:06.763822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763865 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:05:06.763872 | orchestrator | 2026-03-19 05:05:06.763879 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-19 05:05:06.763886 | orchestrator | Thursday 19 March 2026 05:05:05 +0000 (0:00:00.995) 0:28:59.026 ******** 2026-03-19 05:05:06.763894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:06.763920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:55.885104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:05:55.885214 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:05:55.885227 | orchestrator | 2026-03-19 05:05:55.885237 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-19 05:05:55.885248 | orchestrator | Thursday 19 March 2026 05:05:06 +0000 (0:00:00.986) 0:29:00.013 ******** 2026-03-19 05:05:55.885258 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:05:55.885269 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:05:55.885278 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:05:55.885287 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:05:55.885298 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:05:55.885307 | orchestrator | 2026-03-19 05:05:55.885316 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-19 05:05:55.885325 | orchestrator | Thursday 19 March 2026 05:05:40 +0000 (0:00:34.236) 0:29:34.250 ******** 2026-03-19 05:05:55.885334 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:05:55.885344 | orchestrator | 2026-03-19 05:05:55.885367 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-19 05:05:55.885376 | orchestrator | Thursday 19 March 2026 05:05:41 +0000 (0:00:00.132) 0:29:34.383 ******** 2026-03-19 05:05:55.885385 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:05:55.885394 | orchestrator | 2026-03-19 05:05:55.885403 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-19 05:05:55.885412 | orchestrator | Thursday 19 March 2026 05:05:41 +0000 (0:00:00.399) 0:29:34.782 ******** 2026-03-19 05:05:55.885431 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-19 05:05:55.885441 | orchestrator | 2026-03-19 05:05:55.885450 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-19 05:05:55.885481 | orchestrator | Thursday 19 March 2026 05:05:41 +0000 (0:00:00.224) 0:29:35.006 ******** 2026-03-19 05:05:55.885491 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-19 05:05:55.885500 | orchestrator | 2026-03-19 05:05:55.885508 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-19 05:05:55.885518 | orchestrator | Thursday 19 March 2026 05:05:41 +0000 (0:00:00.217) 0:29:35.223 ******** 2026-03-19 05:05:55.885528 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:05:55.885539 | orchestrator | 2026-03-19 05:05:55.885548 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-19 05:05:55.885558 | orchestrator | Thursday 19 March 2026 05:05:42 +0000 (0:00:01.032) 0:29:36.256 ******** 2026-03-19 05:05:55.885567 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:05:55.885578 | orchestrator | 2026-03-19 05:05:55.885588 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-19 05:05:55.885598 | orchestrator | Thursday 19 March 2026 05:05:43 +0000 (0:00:00.939) 0:29:37.195 ******** 2026-03-19 05:05:55.885608 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:05:55.885618 | orchestrator | 2026-03-19 05:05:55.885627 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-19 05:05:55.885637 | orchestrator | Thursday 19 March 2026 05:05:45 +0000 (0:00:01.255) 0:29:38.451 ******** 2026-03-19 05:05:55.885647 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-19 05:05:55.885659 | orchestrator | 2026-03-19 05:05:55.885670 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-19 05:05:55.885681 | 
orchestrator | 2026-03-19 05:05:55.885692 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 05:05:55.885703 | orchestrator | Thursday 19 March 2026 05:05:47 +0000 (0:00:02.508) 0:29:40.959 ******** 2026-03-19 05:05:55.885713 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-19 05:05:55.885724 | orchestrator | 2026-03-19 05:05:55.885736 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-19 05:05:55.885747 | orchestrator | Thursday 19 March 2026 05:05:47 +0000 (0:00:00.233) 0:29:41.192 ******** 2026-03-19 05:05:55.885757 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.885767 | orchestrator | 2026-03-19 05:05:55.885779 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-19 05:05:55.885790 | orchestrator | Thursday 19 March 2026 05:05:48 +0000 (0:00:00.808) 0:29:42.001 ******** 2026-03-19 05:05:55.885801 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.885810 | orchestrator | 2026-03-19 05:05:55.885840 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 05:05:55.885852 | orchestrator | Thursday 19 March 2026 05:05:48 +0000 (0:00:00.139) 0:29:42.140 ******** 2026-03-19 05:05:55.885863 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.885875 | orchestrator | 2026-03-19 05:05:55.885886 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:05:55.885896 | orchestrator | Thursday 19 March 2026 05:05:49 +0000 (0:00:00.489) 0:29:42.630 ******** 2026-03-19 05:05:55.885907 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.885918 | orchestrator | 2026-03-19 05:05:55.885947 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-19 05:05:55.885959 | orchestrator | Thursday 
19 March 2026 05:05:49 +0000 (0:00:00.146) 0:29:42.776 ******** 2026-03-19 05:05:55.885970 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.885981 | orchestrator | 2026-03-19 05:05:55.885992 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-19 05:05:55.886003 | orchestrator | Thursday 19 March 2026 05:05:49 +0000 (0:00:00.146) 0:29:42.922 ******** 2026-03-19 05:05:55.886013 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.886080 | orchestrator | 2026-03-19 05:05:55.886116 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-19 05:05:55.886137 | orchestrator | Thursday 19 March 2026 05:05:49 +0000 (0:00:00.163) 0:29:43.085 ******** 2026-03-19 05:05:55.886146 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:05:55.886156 | orchestrator | 2026-03-19 05:05:55.886165 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-19 05:05:55.886174 | orchestrator | Thursday 19 March 2026 05:05:49 +0000 (0:00:00.141) 0:29:43.227 ******** 2026-03-19 05:05:55.886182 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.886191 | orchestrator | 2026-03-19 05:05:55.886200 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-19 05:05:55.886209 | orchestrator | Thursday 19 March 2026 05:05:50 +0000 (0:00:00.161) 0:29:43.388 ******** 2026-03-19 05:05:55.886218 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:05:55.886228 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:05:55.886237 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:05:55.886246 | orchestrator | 2026-03-19 05:05:55.886256 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-19 05:05:55.886266 | orchestrator | Thursday 19 March 2026 05:05:51 +0000 (0:00:01.041) 0:29:44.430 ******** 2026-03-19 05:05:55.886276 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:05:55.886286 | orchestrator | 2026-03-19 05:05:55.886303 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-19 05:05:55.886313 | orchestrator | Thursday 19 March 2026 05:05:51 +0000 (0:00:00.260) 0:29:44.691 ******** 2026-03-19 05:05:55.886322 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-19 05:05:55.886331 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-19 05:05:55.886340 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-19 05:05:55.886349 | orchestrator | 2026-03-19 05:05:55.886358 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-19 05:05:55.886367 | orchestrator | Thursday 19 March 2026 05:05:53 +0000 (0:00:02.193) 0:29:46.884 ******** 2026-03-19 05:05:55.886376 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 05:05:55.886385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 05:05:55.886395 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 05:05:55.886402 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:05:55.886407 | orchestrator | 2026-03-19 05:05:55.886412 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-19 05:05:55.886418 | orchestrator | Thursday 19 March 2026 05:05:54 +0000 (0:00:00.774) 0:29:47.659 ******** 2026-03-19 05:05:55.886425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-19 05:05:55.886433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-19 05:05:55.886439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-19 05:05:55.886445 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:05:55.886450 | orchestrator | 2026-03-19 05:05:55.886455 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-19 05:05:55.886461 | orchestrator | Thursday 19 March 2026 05:05:55 +0000 (0:00:00.970) 0:29:48.629 ******** 2026-03-19 05:05:55.886468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:05:55.886494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.086205 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.086317 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086336 | orchestrator | 2026-03-19 05:06:00.086350 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-19 05:06:00.086363 | orchestrator | Thursday 19 March 2026 05:05:55 +0000 (0:00:00.508) 0:29:49.138 ******** 2026-03-19 05:06:00.086393 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cfad40490e6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-19 05:05:51.949598', 'end': '2026-03-19 05:05:52.001558', 'delta': '0:00:00.051960', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cfad40490e6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-19 05:06:00.086410 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9403a6c88644', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-19 05:05:52.865687', 'end': '2026-03-19 05:05:52.902579', 'delta': '0:00:00.036892', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9403a6c88644'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-19 05:06:00.086420 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd45e33b5fca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-19 05:05:53.421277', 'end': '2026-03-19 05:05:53.476170', 'delta': '0:00:00.054893', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d45e33b5fca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-19 05:06:00.086431 | orchestrator | 2026-03-19 05:06:00.086443 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-19 05:06:00.086481 | orchestrator | Thursday 19 March 2026 05:05:56 +0000 (0:00:00.203) 0:29:49.342 ******** 2026-03-19 05:06:00.086493 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.086505 | orchestrator | 2026-03-19 05:06:00.086516 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-19 05:06:00.086527 | orchestrator | Thursday 19 March 2026 05:05:56 +0000 (0:00:00.263) 0:29:49.606 ******** 2026-03-19 05:06:00.086537 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086547 | orchestrator | 2026-03-19 05:06:00.086556 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-19 05:06:00.086567 | orchestrator | Thursday 19 March 2026 05:05:56 +0000 (0:00:00.259) 0:29:49.865 ******** 2026-03-19 05:06:00.086577 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.086587 | orchestrator | 2026-03-19 05:06:00.086597 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-19 05:06:00.086607 | orchestrator | Thursday 19 March 2026 05:05:56 +0000 (0:00:00.163) 0:29:50.029 ******** 2026-03-19 05:06:00.086617 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-19 05:06:00.086626 | orchestrator | 2026-03-19 05:06:00.086637 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:06:00.086647 | orchestrator | Thursday 19 March 2026 05:05:57 +0000 (0:00:01.040) 0:29:51.070 ******** 2026-03-19 05:06:00.086657 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.086668 | orchestrator | 2026-03-19 05:06:00.086679 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-19 05:06:00.086690 | orchestrator | Thursday 19 March 2026 05:05:57 +0000 (0:00:00.163) 0:29:51.233 ******** 2026-03-19 05:06:00.086722 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086736 | orchestrator | 2026-03-19 05:06:00.086747 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-19 05:06:00.086758 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.144) 0:29:51.378 ******** 2026-03-19 05:06:00.086768 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086779 | orchestrator | 2026-03-19 05:06:00.086790 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-19 05:06:00.086801 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.265) 0:29:51.643 ******** 2026-03-19 05:06:00.086812 | orchestrator | 
skipping: [testbed-node-5] 2026-03-19 05:06:00.086855 | orchestrator | 2026-03-19 05:06:00.086866 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-19 05:06:00.086875 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.137) 0:29:51.781 ******** 2026-03-19 05:06:00.086885 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086895 | orchestrator | 2026-03-19 05:06:00.086905 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-19 05:06:00.086914 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.156) 0:29:51.937 ******** 2026-03-19 05:06:00.086924 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.086934 | orchestrator | 2026-03-19 05:06:00.086944 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-19 05:06:00.086955 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.174) 0:29:52.111 ******** 2026-03-19 05:06:00.086964 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.086975 | orchestrator | 2026-03-19 05:06:00.086982 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-19 05:06:00.086988 | orchestrator | Thursday 19 March 2026 05:05:58 +0000 (0:00:00.131) 0:29:52.242 ******** 2026-03-19 05:06:00.086996 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.087006 | orchestrator | 2026-03-19 05:06:00.087017 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-19 05:06:00.087036 | orchestrator | Thursday 19 March 2026 05:05:59 +0000 (0:00:00.558) 0:29:52.801 ******** 2026-03-19 05:06:00.087046 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.087065 | orchestrator | 2026-03-19 05:06:00.087074 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-19 05:06:00.087098 
| orchestrator | Thursday 19 March 2026 05:05:59 +0000 (0:00:00.137) 0:29:52.938 ******** 2026-03-19 05:06:00.087108 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:00.087117 | orchestrator | 2026-03-19 05:06:00.087125 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-19 05:06:00.087135 | orchestrator | Thursday 19 March 2026 05:05:59 +0000 (0:00:00.177) 0:29:53.115 ******** 2026-03-19 05:06:00.087146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.087160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}})  2026-03-19 05:06:00.087173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-19 05:06:00.087198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}})  2026-03-19 05:06:00.218378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-19 05:06:00.218530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}})  2026-03-19 05:06:00.218589 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}})  2026-03-19 05:06:00.218600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-19 05:06:00.218639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-19 05:06:00.218664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-19 05:06:00.457881 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:00.457986 | orchestrator | 2026-03-19 05:06:00.458002 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-19 05:06:00.458067 | orchestrator | Thursday 19 March 2026 05:06:00 +0000 (0:00:00.360) 0:29:53.476 ******** 2026-03-19 05:06:00.458085 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba', 'dm-uuid-LVM-prDglspN6lKd0ue3XhWFtlkFrLaA5gfGNlvYb0059lfFXUy6FIUgSpCV0NTwtWzF'], 'uuids': ['33c531bf-8ab8-4e57-8af6-35c4a3abce2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458153 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906', 'scsi-SQEMU_QEMU_HARDDISK_91fa61f2-01b9-4964-86cf-d0da46381906'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91fa61f2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xfRd6A-RzvW-4lGT-wTij-j7ul-ScIf-QpD4l5', 'scsi-0QEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97', 'scsi-SQEMU_QEMU_HARDDISK_6ca08e20-d893-4525-9d75-036a26f1ab97'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458200 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-19-01-18-03-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR', 'dm-uuid-CRYPT-LUKS2-fc29cf4d12784bcf8e32c0d5e77e3d04-0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:00.458294 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab7a01d4--aa20--5ffe--8eee--b634151ce758-osd--block--ab7a01d4--aa20--5ffe--8eee--b634151ce758', 'dm-uuid-LVM-u99QqeEkbnYS9uybfEYxxuDdX83rcAy50v3AQc3c5rwpKX0JuNrA71l5kO5EjpKR'], 'uuids': ['fc29cf4d-1278-4bcf-8e32-c0d5e77e3d04'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6ca08e20', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0v3AQc-3c5r-wpKX-0JuN-rA71-l5kO-5EjpKR']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.167899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KFzQsy-eB7E-KjiG-PPNx-3jl1-VEzU-f0A400', 'scsi-0QEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff', 'scsi-SQEMU_QEMU_HARDDISK_e6be47e7-14ad-42f7-995f-7ba3ed74c5ff'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6be47e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--eb497169--2d92--5217--a604--0fdb844d53ba-osd--block--eb497169--2d92--5217--a604--0fdb844d53ba']}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168017 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'dea79e11', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea79e11-ab75-414a-8bf6-773f9ffc0e77-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168064 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF', 'dm-uuid-CRYPT-LUKS2-33c531bf8ab84e578af635c4a3abce2f-NlvYb0-059l-fFXU-y6FI-UgSp-CV0N-TwtWzF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-19 05:06:04.168103 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:04.168115 | orchestrator | 2026-03-19 05:06:04.168124 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-19 05:06:04.168134 | orchestrator | Thursday 19 March 2026 05:06:00 +0000 (0:00:00.435) 0:29:53.912 ******** 2026-03-19 05:06:04.168143 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:04.168153 | orchestrator | 2026-03-19 05:06:04.168162 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-19 05:06:04.168171 | orchestrator | Thursday 19 March 2026 05:06:01 +0000 (0:00:00.512) 0:29:54.424 ******** 2026-03-19 05:06:04.168179 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:04.168188 | orchestrator | 2026-03-19 05:06:04.168197 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:06:04.168206 | orchestrator | Thursday 19 March 2026 05:06:01 +0000 (0:00:00.141) 0:29:54.565 ******** 2026-03-19 05:06:04.168215 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:04.168224 | orchestrator | 2026-03-19 05:06:04.168232 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:06:04.168241 | orchestrator | Thursday 19 March 2026 05:06:01 +0000 (0:00:00.484) 0:29:55.049 ******** 2026-03-19 05:06:04.168250 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:04.168258 | orchestrator | 2026-03-19 05:06:04.168267 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-19 05:06:04.168276 | orchestrator | Thursday 19 March 2026 05:06:01 +0000 (0:00:00.135) 0:29:55.185 ******** 2026-03-19 05:06:04.168284 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
05:06:04.168293 | orchestrator | 2026-03-19 05:06:04.168302 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-19 05:06:04.168310 | orchestrator | Thursday 19 March 2026 05:06:02 +0000 (0:00:00.259) 0:29:55.445 ******** 2026-03-19 05:06:04.168326 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:04.168335 | orchestrator | 2026-03-19 05:06:04.168344 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-19 05:06:04.168352 | orchestrator | Thursday 19 March 2026 05:06:02 +0000 (0:00:00.173) 0:29:55.618 ******** 2026-03-19 05:06:04.168361 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-19 05:06:04.168371 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-19 05:06:04.168379 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-19 05:06:04.168390 | orchestrator | 2026-03-19 05:06:04.168401 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-19 05:06:04.168411 | orchestrator | Thursday 19 March 2026 05:06:03 +0000 (0:00:01.042) 0:29:56.661 ******** 2026-03-19 05:06:04.168422 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-19 05:06:04.168432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-19 05:06:04.168442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-19 05:06:04.168453 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:04.168463 | orchestrator | 2026-03-19 05:06:04.168474 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-19 05:06:04.168485 | orchestrator | Thursday 19 March 2026 05:06:03 +0000 (0:00:00.167) 0:29:56.828 ******** 2026-03-19 05:06:04.168496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-19 05:06:04.168507 | 
orchestrator | 2026-03-19 05:06:04.168524 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-19 05:06:19.788215 | orchestrator | Thursday 19 March 2026 05:06:04 +0000 (0:00:00.595) 0:29:57.424 ******** 2026-03-19 05:06:19.788321 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788337 | orchestrator | 2026-03-19 05:06:19.788350 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-19 05:06:19.788361 | orchestrator | Thursday 19 March 2026 05:06:04 +0000 (0:00:00.148) 0:29:57.572 ******** 2026-03-19 05:06:19.788369 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788378 | orchestrator | 2026-03-19 05:06:19.788388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-19 05:06:19.788399 | orchestrator | Thursday 19 March 2026 05:06:04 +0000 (0:00:00.157) 0:29:57.730 ******** 2026-03-19 05:06:19.788410 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788420 | orchestrator | 2026-03-19 05:06:19.788431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-19 05:06:19.788457 | orchestrator | Thursday 19 March 2026 05:06:04 +0000 (0:00:00.159) 0:29:57.890 ******** 2026-03-19 05:06:19.788467 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:19.788480 | orchestrator | 2026-03-19 05:06:19.788487 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-19 05:06:19.788494 | orchestrator | Thursday 19 March 2026 05:06:04 +0000 (0:00:00.243) 0:29:58.134 ******** 2026-03-19 05:06:19.788500 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 05:06:19.788507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 05:06:19.788514 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-19 05:06:19.788520 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788527 | orchestrator | 2026-03-19 05:06:19.788533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-19 05:06:19.788539 | orchestrator | Thursday 19 March 2026 05:06:05 +0000 (0:00:00.439) 0:29:58.573 ******** 2026-03-19 05:06:19.788545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 05:06:19.788551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 05:06:19.788557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 05:06:19.788564 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788587 | orchestrator | 2026-03-19 05:06:19.788594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-19 05:06:19.788600 | orchestrator | Thursday 19 March 2026 05:06:05 +0000 (0:00:00.395) 0:29:58.968 ******** 2026-03-19 05:06:19.788606 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-19 05:06:19.788613 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-19 05:06:19.788619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-19 05:06:19.788625 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:06:19.788631 | orchestrator | 2026-03-19 05:06:19.788638 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-19 05:06:19.788644 | orchestrator | Thursday 19 March 2026 05:06:06 +0000 (0:00:00.413) 0:29:59.381 ******** 2026-03-19 05:06:19.788671 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:06:19.788677 | orchestrator | 2026-03-19 05:06:19.788683 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-19 05:06:19.788690 | orchestrator | Thursday 19 March 2026 05:06:06 +0000 
(0:00:00.164) 0:29:59.545 ********
2026-03-19 05:06:19.788696 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-19 05:06:19.788702 | orchestrator |
2026-03-19 05:06:19.788708 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-19 05:06:19.788714 | orchestrator | Thursday 19 March 2026 05:06:06 +0000 (0:00:00.347) 0:29:59.893 ********
2026-03-19 05:06:19.788721 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 05:06:19.788728 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 05:06:19.788734 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 05:06:19.788740 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-19 05:06:19.788746 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 05:06:19.788753 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:06:19.788760 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 05:06:19.788766 | orchestrator |
2026-03-19 05:06:19.788772 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-19 05:06:19.788780 | orchestrator | Thursday 19 March 2026 05:06:07 +0000 (0:00:01.122) 0:30:01.016 ********
2026-03-19 05:06:19.788787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-19 05:06:19.788794 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-19 05:06:19.788832 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-19 05:06:19.788840 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-19 05:06:19.788848 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-19 05:06:19.788855 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:06:19.788862 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-19 05:06:19.788869 | orchestrator |
2026-03-19 05:06:19.788876 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-19 05:06:19.788883 | orchestrator | Thursday 19 March 2026 05:06:09 +0000 (0:00:01.635) 0:30:02.651 ********
2026-03-19 05:06:19.788891 | orchestrator | changed: [testbed-node-5]
2026-03-19 05:06:19.788898 | orchestrator |
2026-03-19 05:06:19.788920 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-19 05:06:19.788929 | orchestrator | Thursday 19 March 2026 05:06:11 +0000 (0:00:02.126) 0:30:04.777 ********
2026-03-19 05:06:19.788936 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:06:19.788952 | orchestrator |
2026-03-19 05:06:19.788959 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-19 05:06:19.788966 | orchestrator | Thursday 19 March 2026 05:06:13 +0000 (0:00:01.996) 0:30:06.774 ********
2026-03-19 05:06:19.788973 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:06:19.788981 | orchestrator |
2026-03-19 05:06:19.788988 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-19 05:06:19.788999 | orchestrator | Thursday 19 March 2026 05:06:14 +0000 (0:00:01.306) 0:30:08.081 ********
2026-03-19 05:06:19.789007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-19 05:06:19.789014 | orchestrator |
2026-03-19 05:06:19.789021 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-19 05:06:19.789029 | orchestrator | Thursday 19 March 2026 05:06:15 +0000 (0:00:00.223) 0:30:08.304 ********
2026-03-19 05:06:19.789036 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-19 05:06:19.789044 | orchestrator |
2026-03-19 05:06:19.789051 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-19 05:06:19.789058 | orchestrator | Thursday 19 March 2026 05:06:15 +0000 (0:00:00.220) 0:30:08.524 ********
2026-03-19 05:06:19.789065 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789072 | orchestrator |
2026-03-19 05:06:19.789080 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-19 05:06:19.789087 | orchestrator | Thursday 19 March 2026 05:06:15 +0000 (0:00:00.132) 0:30:08.657 ********
2026-03-19 05:06:19.789094 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789101 | orchestrator |
2026-03-19 05:06:19.789108 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-19 05:06:19.789115 | orchestrator | Thursday 19 March 2026 05:06:15 +0000 (0:00:00.537) 0:30:09.194 ********
2026-03-19 05:06:19.789123 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789130 | orchestrator |
2026-03-19 05:06:19.789138 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-19 05:06:19.789144 | orchestrator | Thursday 19 March 2026 05:06:16 +0000 (0:00:00.557) 0:30:09.751 ********
2026-03-19 05:06:19.789150 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789157 | orchestrator |
2026-03-19 05:06:19.789163 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-19 05:06:19.789169 | orchestrator | Thursday 19 March 2026 05:06:17 +0000 (0:00:00.542) 0:30:10.294 ********
2026-03-19 05:06:19.789175 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789182 | orchestrator |
2026-03-19 05:06:19.789188 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-19 05:06:19.789194 | orchestrator | Thursday 19 March 2026 05:06:17 +0000 (0:00:00.129) 0:30:10.424 ********
2026-03-19 05:06:19.789200 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789206 | orchestrator |
2026-03-19 05:06:19.789213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-19 05:06:19.789219 | orchestrator | Thursday 19 March 2026 05:06:17 +0000 (0:00:00.129) 0:30:10.554 ********
2026-03-19 05:06:19.789225 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789231 | orchestrator |
2026-03-19 05:06:19.789237 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-19 05:06:19.789243 | orchestrator | Thursday 19 March 2026 05:06:17 +0000 (0:00:00.499) 0:30:11.053 ********
2026-03-19 05:06:19.789249 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789256 | orchestrator |
2026-03-19 05:06:19.789262 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-19 05:06:19.789268 | orchestrator | Thursday 19 March 2026 05:06:18 +0000 (0:00:00.536) 0:30:11.590 ********
2026-03-19 05:06:19.789274 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789280 | orchestrator |
2026-03-19 05:06:19.789286 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-19 05:06:19.789297 | orchestrator | Thursday 19 March 2026 05:06:18 +0000 (0:00:00.577) 0:30:12.168 ********
2026-03-19 05:06:19.789303 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789309 | orchestrator |
2026-03-19 05:06:19.789316 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-19 05:06:19.789322 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.136) 0:30:12.304 ********
2026-03-19 05:06:19.789328 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789334 | orchestrator |
2026-03-19 05:06:19.789340 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-19 05:06:19.789346 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.140) 0:30:12.444 ********
2026-03-19 05:06:19.789353 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789359 | orchestrator |
2026-03-19 05:06:19.789365 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-19 05:06:19.789371 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.152) 0:30:12.596 ********
2026-03-19 05:06:19.789377 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789384 | orchestrator |
2026-03-19 05:06:19.789390 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-19 05:06:19.789396 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.156) 0:30:12.753 ********
2026-03-19 05:06:19.789402 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:19.789408 | orchestrator |
2026-03-19 05:06:19.789414 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-19 05:06:19.789421 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.156) 0:30:12.909 ********
2026-03-19 05:06:19.789427 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:19.789433 | orchestrator |
2026-03-19 05:06:19.789443 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-19 05:06:31.620357 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.132) 0:30:13.042 ********
2026-03-19 05:06:31.620482 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620499 | orchestrator |
2026-03-19 05:06:31.620513 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-19 05:06:31.620524 | orchestrator | Thursday 19 March 2026 05:06:19 +0000 (0:00:00.135) 0:30:13.177 ********
2026-03-19 05:06:31.620535 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620546 | orchestrator |
2026-03-19 05:06:31.620557 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-19 05:06:31.620568 | orchestrator | Thursday 19 March 2026 05:06:20 +0000 (0:00:00.159) 0:30:13.337 ********
2026-03-19 05:06:31.620580 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.620591 | orchestrator |
2026-03-19 05:06:31.620602 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-19 05:06:31.620629 | orchestrator | Thursday 19 March 2026 05:06:20 +0000 (0:00:00.156) 0:30:13.493 ********
2026-03-19 05:06:31.620641 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.620651 | orchestrator |
2026-03-19 05:06:31.620662 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-19 05:06:31.620674 | orchestrator | Thursday 19 March 2026 05:06:20 +0000 (0:00:00.573) 0:30:14.067 ********
2026-03-19 05:06:31.620684 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620695 | orchestrator |
2026-03-19 05:06:31.620706 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-19 05:06:31.620717 | orchestrator | Thursday 19 March 2026 05:06:20 +0000 (0:00:00.140) 0:30:14.208 ********
2026-03-19 05:06:31.620728 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620739 | orchestrator |
2026-03-19 05:06:31.620750 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-19 05:06:31.620761 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.159) 0:30:14.367 ********
2026-03-19 05:06:31.620772 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620783 | orchestrator |
2026-03-19 05:06:31.620819 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-19 05:06:31.620854 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.133) 0:30:14.501 ********
2026-03-19 05:06:31.620867 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620881 | orchestrator |
2026-03-19 05:06:31.620894 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-19 05:06:31.620907 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.134) 0:30:14.635 ********
2026-03-19 05:06:31.620919 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620932 | orchestrator |
2026-03-19 05:06:31.620945 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-19 05:06:31.620958 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.151) 0:30:14.786 ********
2026-03-19 05:06:31.620970 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.620982 | orchestrator |
2026-03-19 05:06:31.620995 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-19 05:06:31.621008 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.141) 0:30:14.928 ********
2026-03-19 05:06:31.621025 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621038 | orchestrator |
2026-03-19 05:06:31.621051 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-19 05:06:31.621064 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.137) 0:30:15.065 ********
2026-03-19 05:06:31.621077 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621090 | orchestrator |
2026-03-19 05:06:31.621103 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-19 05:06:31.621116 | orchestrator | Thursday 19 March 2026 05:06:21 +0000 (0:00:00.161) 0:30:15.227 ********
2026-03-19 05:06:31.621129 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621143 | orchestrator |
2026-03-19 05:06:31.621155 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-19 05:06:31.621168 | orchestrator | Thursday 19 March 2026 05:06:22 +0000 (0:00:00.154) 0:30:15.381 ********
2026-03-19 05:06:31.621181 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621195 | orchestrator |
2026-03-19 05:06:31.621207 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-19 05:06:31.621219 | orchestrator | Thursday 19 March 2026 05:06:22 +0000 (0:00:00.125) 0:30:15.507 ********
2026-03-19 05:06:31.621230 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621241 | orchestrator |
2026-03-19 05:06:31.621252 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-19 05:06:31.621263 | orchestrator | Thursday 19 March 2026 05:06:22 +0000 (0:00:00.153) 0:30:15.660 ********
2026-03-19 05:06:31.621274 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621285 | orchestrator |
2026-03-19 05:06:31.621296 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-19 05:06:31.621307 | orchestrator | Thursday 19 March 2026 05:06:22 +0000 (0:00:00.573) 0:30:16.233 ********
2026-03-19 05:06:31.621317 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.621329 | orchestrator |
2026-03-19 05:06:31.621340 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-19 05:06:31.621350 | orchestrator | Thursday 19 March 2026 05:06:23 +0000 (0:00:00.935) 0:30:17.169 ********
2026-03-19 05:06:31.621361 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.621372 | orchestrator |
2026-03-19 05:06:31.621383 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-19 05:06:31.621394 | orchestrator | Thursday 19 March 2026 05:06:25 +0000 (0:00:01.255) 0:30:18.424 ********
2026-03-19 05:06:31.621405 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-19 05:06:31.621418 | orchestrator |
2026-03-19 05:06:31.621429 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-19 05:06:31.621440 | orchestrator | Thursday 19 March 2026 05:06:25 +0000 (0:00:00.211) 0:30:18.636 ********
2026-03-19 05:06:31.621451 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621462 | orchestrator |
2026-03-19 05:06:31.621481 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-19 05:06:31.621510 | orchestrator | Thursday 19 March 2026 05:06:25 +0000 (0:00:00.133) 0:30:18.770 ********
2026-03-19 05:06:31.621523 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621534 | orchestrator |
2026-03-19 05:06:31.621545 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-19 05:06:31.621556 | orchestrator | Thursday 19 March 2026 05:06:25 +0000 (0:00:00.129) 0:30:18.899 ********
2026-03-19 05:06:31.621566 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-19 05:06:31.621577 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-19 05:06:31.621588 | orchestrator |
2026-03-19 05:06:31.621599 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-19 05:06:31.621610 | orchestrator | Thursday 19 March 2026 05:06:26 +0000 (0:00:00.872) 0:30:19.771 ********
2026-03-19 05:06:31.621621 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.621632 | orchestrator |
2026-03-19 05:06:31.621648 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-19 05:06:31.621659 | orchestrator | Thursday 19 March 2026 05:06:26 +0000 (0:00:00.477) 0:30:20.249 ********
2026-03-19 05:06:31.621670 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621681 | orchestrator |
2026-03-19 05:06:31.621692 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-19 05:06:31.621703 | orchestrator | Thursday 19 March 2026 05:06:27 +0000 (0:00:00.163) 0:30:20.412 ********
2026-03-19 05:06:31.621714 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621725 | orchestrator |
2026-03-19 05:06:31.621736 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-19 05:06:31.621747 | orchestrator | Thursday 19 March 2026 05:06:27 +0000 (0:00:00.144) 0:30:20.557 ********
2026-03-19 05:06:31.621758 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621769 | orchestrator |
2026-03-19 05:06:31.621780 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-19 05:06:31.621790 | orchestrator | Thursday 19 March 2026 05:06:27 +0000 (0:00:00.140) 0:30:20.698 ********
2026-03-19 05:06:31.621823 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-19 05:06:31.621835 | orchestrator |
2026-03-19 05:06:31.621845 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-19 05:06:31.621856 | orchestrator | Thursday 19 March 2026 05:06:27 +0000 (0:00:00.394) 0:30:21.093 ********
2026-03-19 05:06:31.621867 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.621878 | orchestrator |
2026-03-19 05:06:31.621889 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-19 05:06:31.621899 | orchestrator | Thursday 19 March 2026 05:06:28 +0000 (0:00:00.728) 0:30:21.821 ********
2026-03-19 05:06:31.621910 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-19 05:06:31.621921 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-19 05:06:31.621932 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-19 05:06:31.621942 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621953 | orchestrator |
2026-03-19 05:06:31.621964 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-19 05:06:31.621975 | orchestrator | Thursday 19 March 2026 05:06:28 +0000 (0:00:00.156) 0:30:21.977 ********
2026-03-19 05:06:31.621986 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.621997 | orchestrator |
2026-03-19 05:06:31.622008 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-19 05:06:31.622067 | orchestrator | Thursday 19 March 2026 05:06:28 +0000 (0:00:00.136) 0:30:22.113 ********
2026-03-19 05:06:31.622079 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.622090 | orchestrator |
2026-03-19 05:06:31.622101 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-19 05:06:31.622121 | orchestrator | Thursday 19 March 2026 05:06:29 +0000 (0:00:00.173) 0:30:22.286 ********
2026-03-19 05:06:31.622133 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.622144 | orchestrator |
2026-03-19 05:06:31.622155 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-19 05:06:31.622165 | orchestrator | Thursday 19 March 2026 05:06:29 +0000 (0:00:00.151) 0:30:22.438 ********
2026-03-19 05:06:31.622176 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.622196 | orchestrator |
2026-03-19 05:06:31.622207 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-19 05:06:31.622218 | orchestrator | Thursday 19 March 2026 05:06:29 +0000 (0:00:00.162) 0:30:22.600 ********
2026-03-19 05:06:31.622229 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.622240 | orchestrator |
2026-03-19 05:06:31.622251 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-19 05:06:31.622262 | orchestrator | Thursday 19 March 2026 05:06:29 +0000 (0:00:00.157) 0:30:22.758 ********
2026-03-19 05:06:31.622273 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.622283 | orchestrator |
2026-03-19 05:06:31.622294 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-19 05:06:31.622305 | orchestrator | Thursday 19 March 2026 05:06:31 +0000 (0:00:01.589) 0:30:24.348 ********
2026-03-19 05:06:31.622316 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:31.622327 | orchestrator |
2026-03-19 05:06:31.622337 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-19 05:06:31.622348 | orchestrator | Thursday 19 March 2026 05:06:31 +0000 (0:00:00.148) 0:30:24.496 ********
2026-03-19 05:06:31.622359 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-19 05:06:31.622370 | orchestrator |
2026-03-19 05:06:31.622381 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-19 05:06:31.622392 | orchestrator | Thursday 19 March 2026 05:06:31 +0000 (0:00:00.209) 0:30:24.705 ********
2026-03-19 05:06:31.622402 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:31.622414 | orchestrator |
2026-03-19 05:06:31.622424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-19 05:06:31.622442 | orchestrator | Thursday 19 March 2026 05:06:31 +0000 (0:00:00.166) 0:30:24.872 ********
2026-03-19 05:06:52.084189 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084312 | orchestrator |
2026-03-19 05:06:52.084340 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-19 05:06:52.084362 | orchestrator | Thursday 19 March 2026 05:06:31 +0000 (0:00:00.171) 0:30:25.043 ********
2026-03-19 05:06:52.084383 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084402 | orchestrator |
2026-03-19 05:06:52.084420 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-19 05:06:52.084438 | orchestrator | Thursday 19 March 2026 05:06:32 +0000 (0:00:00.482) 0:30:25.526 ********
2026-03-19 05:06:52.084457 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084476 | orchestrator |
2026-03-19 05:06:52.084522 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-19 05:06:52.084565 | orchestrator | Thursday 19 March 2026 05:06:32 +0000 (0:00:00.165) 0:30:25.692 ********
2026-03-19 05:06:52.084585 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084604 | orchestrator |
2026-03-19 05:06:52.084615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-19 05:06:52.084627 | orchestrator | Thursday 19 March 2026 05:06:32 +0000 (0:00:00.160) 0:30:25.852 ********
2026-03-19 05:06:52.084638 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084649 | orchestrator |
2026-03-19 05:06:52.084660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-19 05:06:52.084671 | orchestrator | Thursday 19 March 2026 05:06:32 +0000 (0:00:00.158) 0:30:26.011 ********
2026-03-19 05:06:52.084682 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084717 | orchestrator |
2026-03-19 05:06:52.084732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-19 05:06:52.084746 | orchestrator | Thursday 19 March 2026 05:06:32 +0000 (0:00:00.155) 0:30:26.166 ********
2026-03-19 05:06:52.084759 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.084772 | orchestrator |
2026-03-19 05:06:52.084812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-19 05:06:52.084825 | orchestrator | Thursday 19 March 2026 05:06:33 +0000 (0:00:00.162) 0:30:26.328 ********
2026-03-19 05:06:52.084838 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:06:52.084853 | orchestrator |
2026-03-19 05:06:52.084867 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-19 05:06:52.084879 | orchestrator | Thursday 19 March 2026 05:06:33 +0000 (0:00:00.230) 0:30:26.558 ********
2026-03-19 05:06:52.084892 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-19 05:06:52.084906 | orchestrator |
2026-03-19 05:06:52.084919 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-19 05:06:52.084933 | orchestrator | Thursday 19 March 2026 05:06:33 +0000 (0:00:00.207) 0:30:26.766 ********
2026-03-19 05:06:52.084946 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-19 05:06:52.084960 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-19 05:06:52.084972 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-19 05:06:52.084985 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-19 05:06:52.084998 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-19 05:06:52.085011 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-19 05:06:52.085024 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-19 05:06:52.085037 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-19 05:06:52.085049 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-19 05:06:52.085063 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-19 05:06:52.085076 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-19 05:06:52.085089 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-19 05:06:52.085102 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-19 05:06:52.085113 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-19 05:06:52.085123 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-19 05:06:52.085134 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-19 05:06:52.085145 | orchestrator |
2026-03-19 05:06:52.085156 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-19 05:06:52.085167 | orchestrator | Thursday 19 March 2026 05:06:39 +0000 (0:00:05.785) 0:30:32.551 ********
2026-03-19 05:06:52.085178 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-19 05:06:52.085189 | orchestrator |
2026-03-19 05:06:52.085200 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-19 05:06:52.085211 | orchestrator | Thursday 19 March 2026 05:06:39 +0000 (0:00:00.225) 0:30:32.777 ********
2026-03-19 05:06:52.085222 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:06:52.085234 | orchestrator |
2026-03-19 05:06:52.085245 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-19 05:06:52.085255 | orchestrator | Thursday 19 March 2026 05:06:40 +0000 (0:00:00.941) 0:30:33.718 ********
2026-03-19 05:06:52.085267 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:06:52.085277 | orchestrator |
2026-03-19 05:06:52.085300 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-19 05:06:52.085321 | orchestrator | Thursday 19 March 2026 05:06:41 +0000 (0:00:01.012) 0:30:34.731 ********
2026-03-19 05:06:52.085332 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085343 | orchestrator |
2026-03-19 05:06:52.085354 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-19 05:06:52.085386 | orchestrator | Thursday 19 March 2026 05:06:41 +0000 (0:00:00.144) 0:30:34.875 ********
2026-03-19 05:06:52.085398 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085409 | orchestrator |
2026-03-19 05:06:52.085421 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-19 05:06:52.085432 | orchestrator | Thursday 19 March 2026 05:06:41 +0000 (0:00:00.152) 0:30:35.027 ********
2026-03-19 05:06:52.085443 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085454 | orchestrator |
2026-03-19 05:06:52.085465 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-19 05:06:52.085476 | orchestrator | Thursday 19 March 2026 05:06:41 +0000 (0:00:00.146) 0:30:35.174 ********
2026-03-19 05:06:52.085487 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085498 | orchestrator |
2026-03-19 05:06:52.085509 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-19 05:06:52.085526 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.175) 0:30:35.350 ********
2026-03-19 05:06:52.085538 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085549 | orchestrator |
2026-03-19 05:06:52.085560 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-19 05:06:52.085571 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.132) 0:30:35.482 ********
2026-03-19 05:06:52.085583 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085594 | orchestrator |
2026-03-19 05:06:52.085605 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-19 05:06:52.085616 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.149) 0:30:35.632 ********
2026-03-19 05:06:52.085627 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085638 | orchestrator |
2026-03-19 05:06:52.085649 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-19 05:06:52.085661 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.144) 0:30:35.777 ********
2026-03-19 05:06:52.085672 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085683 | orchestrator |
2026-03-19 05:06:52.085694 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-19 05:06:52.085704 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.129) 0:30:35.906 ********
2026-03-19 05:06:52.085715 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085726 | orchestrator |
2026-03-19 05:06:52.085737 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-19 05:06:52.085748 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.150) 0:30:36.056 ********
2026-03-19 05:06:52.085759 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085770 | orchestrator |
2026-03-19 05:06:52.085835 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-19 05:06:52.085846 | orchestrator | Thursday 19 March 2026 05:06:42 +0000 (0:00:00.151) 0:30:36.208 ********
2026-03-19 05:06:52.085858 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.085869 | orchestrator |
2026-03-19 05:06:52.085880 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-19 05:06:52.085894 | orchestrator | Thursday 19 March 2026 05:06:43 +0000 (0:00:00.163) 0:30:36.371 ********
2026-03-19 05:06:52.085913 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-19 05:06:52.085933 | orchestrator |
2026-03-19 05:06:52.085951 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-19 05:06:52.085968 | orchestrator | Thursday 19 March 2026 05:06:47 +0000 (0:00:04.406) 0:30:40.778 ********
2026-03-19 05:06:52.085987 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-19 05:06:52.086089 | orchestrator |
2026-03-19 05:06:52.086111 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-19 05:06:52.086172 | orchestrator | Thursday 19 March 2026 05:06:47 +0000 (0:00:00.184) 0:30:40.962 ********
2026-03-19 05:06:52.086196 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-19 05:06:52.086212 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-19 05:06:52.086225 | orchestrator |
2026-03-19 05:06:52.086236 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-19 05:06:52.086247 | orchestrator | Thursday 19 March 2026 05:06:51 +0000 (0:00:03.901) 0:30:44.864 ********
2026-03-19 05:06:52.086258 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.086269 | orchestrator |
2026-03-19 05:06:52.086280 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-19 05:06:52.086290 | orchestrator | Thursday 19 March 2026 05:06:51 +0000 (0:00:00.139) 0:30:45.003 ********
2026-03-19 05:06:52.086301 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.086312 | orchestrator |
2026-03-19 05:06:52.086323 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-19 05:06:52.086334 | orchestrator | Thursday 19 March 2026 05:06:51 +0000 (0:00:00.138) 0:30:45.141 ********
2026-03-19 05:06:52.086345 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:06:52.086356 | orchestrator |
2026-03-19 05:06:52.086367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-19 05:06:52.086392 | orchestrator | Thursday 19 March 2026 05:06:52 +0000 (0:00:00.196) 0:30:45.337 ********
2026-03-19 05:07:46.282402 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:07:46.282526 | orchestrator |
2026-03-19 05:07:46.282545 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-19 05:07:46.282563 | orchestrator | Thursday 19 March 2026 05:06:52 +0000 (0:00:00.183) 0:30:45.521 ********
2026-03-19 05:07:46.282582 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:07:46.282599 | orchestrator |
2026-03-19 05:07:46.282617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-19 05:07:46.282635 | orchestrator | Thursday 19 March 2026 05:06:52 +0000 (0:00:00.167) 0:30:45.688 ********
2026-03-19 05:07:46.282653 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:07:46.282673 | orchestrator |
2026-03-19 05:07:46.282709 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-19 05:07:46.282729 | orchestrator | Thursday 19 March 2026 05:06:52 +0000 (0:00:00.267) 0:30:45.955 ********
2026-03-19 05:07:46.282747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:07:46.282766 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:07:46.282819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:07:46.282837 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:07:46.282855 | orchestrator |
2026-03-19 05:07:46.282873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-19 05:07:46.282890 | orchestrator | Thursday 19 March 2026 05:06:53 +0000 (0:00:00.407) 0:30:46.362 ********
2026-03-19 05:07:46.282908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:07:46.282926 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:07:46.282978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:07:46.282998 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:07:46.283016 | orchestrator |
2026-03-19 05:07:46.283034 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-19 05:07:46.283052 | orchestrator | Thursday 19 March 2026 05:06:53 +0000 (0:00:00.408) 0:30:46.771 ********
2026-03-19 05:07:46.283070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-19 05:07:46.283087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-19 05:07:46.283105 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-19 05:07:46.283122 | orchestrator | skipping: [testbed-node-5]
2026-03-19 05:07:46.283139 | orchestrator |
2026-03-19 05:07:46.283158 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-19 05:07:46.283176 | orchestrator | Thursday 19 March 2026 05:06:54 +0000 (0:00:00.792) 0:30:47.563 ********
2026-03-19 05:07:46.283195 | orchestrator | ok: [testbed-node-5]
2026-03-19 05:07:46.283213 | orchestrator |
2026-03-19 05:07:46.283231 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-19 05:07:46.283249 | orchestrator | Thursday 19 March 2026 05:06:54 +0000 (0:00:00.165) 0:30:47.729 ********
2026-03-19 05:07:46.283267 | orchestrator | ok:
[testbed-node-5] => (item=0) 2026-03-19 05:07:46.283285 | orchestrator | 2026-03-19 05:07:46.283306 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-19 05:07:46.283325 | orchestrator | Thursday 19 March 2026 05:06:55 +0000 (0:00:01.097) 0:30:48.827 ******** 2026-03-19 05:07:46.283345 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:07:46.283364 | orchestrator | 2026-03-19 05:07:46.283384 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-19 05:07:46.283403 | orchestrator | Thursday 19 March 2026 05:06:56 +0000 (0:00:00.872) 0:30:49.700 ******** 2026-03-19 05:07:46.283423 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-03-19 05:07:46.283441 | orchestrator | 2026-03-19 05:07:46.283461 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 05:07:46.283482 | orchestrator | Thursday 19 March 2026 05:06:56 +0000 (0:00:00.215) 0:30:49.915 ******** 2026-03-19 05:07:46.283502 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:07:46.283521 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 05:07:46.283541 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:07:46.283562 | orchestrator | 2026-03-19 05:07:46.283582 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:07:46.283602 | orchestrator | Thursday 19 March 2026 05:06:58 +0000 (0:00:02.298) 0:30:52.214 ******** 2026-03-19 05:07:46.283621 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-19 05:07:46.283642 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-19 05:07:46.283661 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:07:46.283681 | orchestrator | 2026-03-19 05:07:46.283701 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-19 05:07:46.283721 | orchestrator | Thursday 19 March 2026 05:07:00 +0000 (0:00:01.075) 0:30:53.290 ******** 2026-03-19 05:07:46.283740 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:07:46.283758 | orchestrator | 2026-03-19 05:07:46.283802 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-19 05:07:46.283823 | orchestrator | Thursday 19 March 2026 05:07:00 +0000 (0:00:00.137) 0:30:53.427 ******** 2026-03-19 05:07:46.283841 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-03-19 05:07:46.283876 | orchestrator | 2026-03-19 05:07:46.283893 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-19 05:07:46.283911 | orchestrator | Thursday 19 March 2026 05:07:00 +0000 (0:00:00.220) 0:30:53.647 ******** 2026-03-19 05:07:46.283930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 05:07:46.283967 | orchestrator | 2026-03-19 05:07:46.283985 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-19 05:07:46.284001 | orchestrator | Thursday 19 March 2026 05:07:01 +0000 (0:00:00.639) 0:30:54.287 ******** 2026-03-19 05:07:46.284046 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:07:46.284065 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-19 05:07:46.284083 | orchestrator | 2026-03-19 05:07:46.284100 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-19 05:07:46.284116 | orchestrator | Thursday 19 March 2026 05:07:05 +0000 (0:00:04.665) 0:30:58.953 ******** 
2026-03-19 05:07:46.284133 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-19 05:07:46.284161 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-19 05:07:46.284178 | orchestrator | 2026-03-19 05:07:46.284194 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-19 05:07:46.284210 | orchestrator | Thursday 19 March 2026 05:07:08 +0000 (0:00:02.536) 0:31:01.489 ******** 2026-03-19 05:07:46.284227 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-19 05:07:46.284244 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:07:46.284261 | orchestrator | 2026-03-19 05:07:46.284278 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-19 05:07:46.284294 | orchestrator | Thursday 19 March 2026 05:07:09 +0000 (0:00:01.610) 0:31:03.099 ******** 2026-03-19 05:07:46.284311 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-19 05:07:46.284328 | orchestrator | 2026-03-19 05:07:46.284345 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-19 05:07:46.284362 | orchestrator | Thursday 19 March 2026 05:07:10 +0000 (0:00:00.246) 0:31:03.346 ******** 2026-03-19 05:07:46.284378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284465 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:07:46.284482 | orchestrator | 2026-03-19 05:07:46.284499 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-19 05:07:46.284517 | orchestrator | Thursday 19 March 2026 05:07:10 +0000 (0:00:00.623) 0:31:03.970 ******** 2026-03-19 05:07:46.284535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-19 05:07:46.284626 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:07:46.284656 | orchestrator | 2026-03-19 05:07:46.284674 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-19 05:07:46.284693 | orchestrator | Thursday 19 March 2026 05:07:11 +0000 (0:00:00.594) 0:31:04.564 ******** 2026-03-19 05:07:46.284711 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:07:46.284731 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:07:46.284750 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:07:46.284767 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:07:46.284876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-19 05:07:46.284896 | orchestrator | 2026-03-19 05:07:46.284908 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-19 05:07:46.284919 | orchestrator | Thursday 19 March 2026 05:07:46 +0000 (0:00:34.845) 0:31:39.410 ******** 2026-03-19 05:07:46.284930 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:07:46.284940 | orchestrator | 2026-03-19 05:07:46.284950 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-19 05:07:46.284972 | orchestrator | Thursday 19 March 2026 05:07:46 +0000 (0:00:00.124) 0:31:39.534 ******** 2026-03-19 05:08:13.782993 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.783112 | orchestrator | 2026-03-19 05:08:13.783131 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-19 05:08:13.783144 | orchestrator | Thursday 19 March 2026 05:07:46 +0000 (0:00:00.129) 0:31:39.663 ******** 2026-03-19 05:08:13.783155 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-19 05:08:13.783167 | orchestrator | 2026-03-19 05:08:13.783177 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-19 05:08:13.783187 | orchestrator | Thursday 19 March 2026 05:07:46 +0000 (0:00:00.215) 0:31:39.878 ******** 2026-03-19 05:08:13.783209 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-19 05:08:13.783216 | orchestrator | 2026-03-19 05:08:13.783222 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-19 05:08:13.783228 | orchestrator | Thursday 19 March 2026 05:07:46 +0000 (0:00:00.190) 0:31:40.069 ******** 2026-03-19 05:08:13.783234 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.783240 | orchestrator | 2026-03-19 05:08:13.783246 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-19 05:08:13.783252 | orchestrator | Thursday 19 March 2026 05:07:47 +0000 (0:00:01.073) 0:31:41.142 ******** 2026-03-19 05:08:13.783258 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.783264 | orchestrator | 2026-03-19 05:08:13.783270 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-19 05:08:13.783276 | orchestrator | Thursday 19 March 2026 05:07:49 +0000 (0:00:01.166) 0:31:42.308 ******** 2026-03-19 05:08:13.783282 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.783288 | orchestrator | 2026-03-19 05:08:13.783297 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-19 05:08:13.783307 | orchestrator | Thursday 19 March 2026 05:07:50 +0000 (0:00:01.305) 0:31:43.614 ******** 2026-03-19 05:08:13.783317 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-19 05:08:13.783327 | orchestrator | 2026-03-19 05:08:13.783337 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-19 05:08:13.783370 | 
orchestrator | skipping: no hosts matched 2026-03-19 05:08:13.783380 | orchestrator | 2026-03-19 05:08:13.783390 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-19 05:08:13.783399 | orchestrator | skipping: no hosts matched 2026-03-19 05:08:13.783408 | orchestrator | 2026-03-19 05:08:13.783417 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-19 05:08:13.783426 | orchestrator | skipping: no hosts matched 2026-03-19 05:08:13.783435 | orchestrator | 2026-03-19 05:08:13.783443 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-19 05:08:13.783452 | orchestrator | 2026-03-19 05:08:13.783461 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-19 05:08:13.783470 | orchestrator | Thursday 19 March 2026 05:07:53 +0000 (0:00:03.541) 0:31:47.156 ******** 2026-03-19 05:08:13.783478 | orchestrator | changed: [testbed-node-0] 2026-03-19 05:08:13.783487 | orchestrator | changed: [testbed-node-1] 2026-03-19 05:08:13.783496 | orchestrator | changed: [testbed-node-3] 2026-03-19 05:08:13.783505 | orchestrator | changed: [testbed-node-2] 2026-03-19 05:08:13.783514 | orchestrator | changed: [testbed-node-4] 2026-03-19 05:08:13.783522 | orchestrator | changed: [testbed-node-5] 2026-03-19 05:08:13.783532 | orchestrator | 2026-03-19 05:08:13.783541 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-19 05:08:13.783552 | orchestrator | Thursday 19 March 2026 05:07:55 +0000 (0:00:01.446) 0:31:48.602 ******** 2026-03-19 05:08:13.783564 | orchestrator | changed: [testbed-node-0] 2026-03-19 05:08:13.783574 | orchestrator | changed: [testbed-node-1] 2026-03-19 05:08:13.783584 | orchestrator | changed: [testbed-node-2] 2026-03-19 05:08:13.783594 | orchestrator | changed: [testbed-node-3] 2026-03-19 05:08:13.783605 | 
orchestrator | changed: [testbed-node-4] 2026-03-19 05:08:13.783616 | orchestrator | changed: [testbed-node-5] 2026-03-19 05:08:13.783627 | orchestrator | 2026-03-19 05:08:13.783637 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 05:08:13.783646 | orchestrator | Thursday 19 March 2026 05:07:57 +0000 (0:00:02.594) 0:31:51.196 ******** 2026-03-19 05:08:13.783657 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.783666 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.783676 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.783684 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.783693 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.783702 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.783712 | orchestrator | 2026-03-19 05:08:13.783721 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:08:13.783731 | orchestrator | Thursday 19 March 2026 05:07:59 +0000 (0:00:01.310) 0:31:52.507 ******** 2026-03-19 05:08:13.783740 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.783749 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.783758 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.783767 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.783807 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.783819 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.783829 | orchestrator | 2026-03-19 05:08:13.783840 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-19 05:08:13.783851 | orchestrator | Thursday 19 March 2026 05:08:00 +0000 (0:00:01.374) 0:31:53.882 ******** 2026-03-19 05:08:13.783863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 05:08:13.783876 | 
orchestrator | 2026-03-19 05:08:13.783886 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-19 05:08:13.783897 | orchestrator | Thursday 19 March 2026 05:08:01 +0000 (0:00:01.042) 0:31:54.924 ******** 2026-03-19 05:08:13.783909 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 05:08:13.783931 | orchestrator | 2026-03-19 05:08:13.783966 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-19 05:08:13.783979 | orchestrator | Thursday 19 March 2026 05:08:03 +0000 (0:00:01.349) 0:31:56.273 ******** 2026-03-19 05:08:13.783990 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.784001 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.784012 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:13.784023 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784034 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784045 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.784055 | orchestrator | 2026-03-19 05:08:13.784066 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-19 05:08:13.784086 | orchestrator | Thursday 19 March 2026 05:08:04 +0000 (0:00:01.084) 0:31:57.357 ******** 2026-03-19 05:08:13.784099 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784110 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784121 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784132 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.784143 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.784154 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.784164 | orchestrator | 2026-03-19 05:08:13.784175 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-19 05:08:13.784186 | orchestrator | Thursday 19 March 2026 05:08:05 +0000 (0:00:01.056) 0:31:58.414 ******** 2026-03-19 05:08:13.784196 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784206 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784216 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784225 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.784235 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.784244 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.784254 | orchestrator | 2026-03-19 05:08:13.784264 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-19 05:08:13.784274 | orchestrator | Thursday 19 March 2026 05:08:06 +0000 (0:00:01.385) 0:31:59.800 ******** 2026-03-19 05:08:13.784282 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784291 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784301 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784310 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.784320 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.784329 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.784339 | orchestrator | 2026-03-19 05:08:13.784348 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-19 05:08:13.784357 | orchestrator | Thursday 19 March 2026 05:08:07 +0000 (0:00:01.068) 0:32:00.868 ******** 2026-03-19 05:08:13.784366 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:13.784375 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784384 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.784393 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.784402 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.784410 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784420 | orchestrator | 
2026-03-19 05:08:13.784429 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-19 05:08:13.784438 | orchestrator | Thursday 19 March 2026 05:08:08 +0000 (0:00:00.950) 0:32:01.819 ******** 2026-03-19 05:08:13.784447 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784456 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784464 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784474 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:13.784483 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784492 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784501 | orchestrator | 2026-03-19 05:08:13.784510 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-19 05:08:13.784519 | orchestrator | Thursday 19 March 2026 05:08:09 +0000 (0:00:00.629) 0:32:02.448 ******** 2026-03-19 05:08:13.784528 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784548 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784557 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784565 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:13.784575 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784584 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784592 | orchestrator | 2026-03-19 05:08:13.784602 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-19 05:08:13.784611 | orchestrator | Thursday 19 March 2026 05:08:10 +0000 (0:00:00.892) 0:32:03.341 ******** 2026-03-19 05:08:13.784620 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.784629 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.784638 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.784647 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.784656 | orchestrator | ok: [testbed-node-4] 
2026-03-19 05:08:13.784665 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.784674 | orchestrator | 2026-03-19 05:08:13.784684 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-19 05:08:13.784694 | orchestrator | Thursday 19 March 2026 05:08:11 +0000 (0:00:01.073) 0:32:04.414 ******** 2026-03-19 05:08:13.784704 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.784714 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.784725 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.784735 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:13.784745 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:13.784755 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:13.784763 | orchestrator | 2026-03-19 05:08:13.784772 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-19 05:08:13.784805 | orchestrator | Thursday 19 March 2026 05:08:12 +0000 (0:00:01.039) 0:32:05.454 ******** 2026-03-19 05:08:13.784815 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:13.784825 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:13.784835 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:13.784845 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:13.784854 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784864 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784873 | orchestrator | 2026-03-19 05:08:13.784883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-19 05:08:13.784891 | orchestrator | Thursday 19 March 2026 05:08:13 +0000 (0:00:00.912) 0:32:06.366 ******** 2026-03-19 05:08:13.784897 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:13.784903 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:13.784909 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:13.784914 | orchestrator | skipping: 
[testbed-node-3] 2026-03-19 05:08:13.784920 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:13.784926 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:13.784932 | orchestrator | 2026-03-19 05:08:13.784950 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-19 05:08:45.372310 | orchestrator | Thursday 19 March 2026 05:08:13 +0000 (0:00:00.664) 0:32:07.030 ******** 2026-03-19 05:08:45.372429 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.372446 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.372457 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.372467 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.372478 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.372489 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.372499 | orchestrator | 2026-03-19 05:08:45.372511 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-19 05:08:45.372538 | orchestrator | Thursday 19 March 2026 05:08:14 +0000 (0:00:00.975) 0:32:08.005 ******** 2026-03-19 05:08:45.372548 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.372559 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.372569 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.372579 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.372589 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.372624 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.372635 | orchestrator | 2026-03-19 05:08:45.372647 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-19 05:08:45.372657 | orchestrator | Thursday 19 March 2026 05:08:15 +0000 (0:00:00.697) 0:32:08.703 ******** 2026-03-19 05:08:45.372668 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.372679 | orchestrator | skipping: [testbed-node-1] 2026-03-19 
05:08:45.372690 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.372701 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.372713 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.372724 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.372735 | orchestrator | 2026-03-19 05:08:45.372745 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-19 05:08:45.372757 | orchestrator | Thursday 19 March 2026 05:08:16 +0000 (0:00:00.906) 0:32:09.610 ******** 2026-03-19 05:08:45.372824 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.372840 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.372851 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.372863 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.372874 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.372885 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.372897 | orchestrator | 2026-03-19 05:08:45.372908 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-19 05:08:45.372919 | orchestrator | Thursday 19 March 2026 05:08:16 +0000 (0:00:00.637) 0:32:10.247 ******** 2026-03-19 05:08:45.372929 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.372939 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.372949 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.372958 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.372968 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.372977 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.372987 | orchestrator | 2026-03-19 05:08:45.372996 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-19 05:08:45.373006 | orchestrator | Thursday 19 March 2026 05:08:17 +0000 (0:00:00.905) 0:32:11.153 ******** 2026-03-19 05:08:45.373016 | 
orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373026 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373036 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373046 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.373056 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.373066 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.373075 | orchestrator | 2026-03-19 05:08:45.373085 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-19 05:08:45.373095 | orchestrator | Thursday 19 March 2026 05:08:18 +0000 (0:00:00.638) 0:32:11.791 ******** 2026-03-19 05:08:45.373105 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373114 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373124 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373134 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.373144 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.373154 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.373165 | orchestrator | 2026-03-19 05:08:45.373176 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-19 05:08:45.373187 | orchestrator | Thursday 19 March 2026 05:08:19 +0000 (0:00:00.629) 0:32:12.421 ******** 2026-03-19 05:08:45.373197 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373206 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373216 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373226 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.373236 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.373246 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.373256 | orchestrator | 2026-03-19 05:08:45.373266 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-19 05:08:45.373276 | orchestrator | Thursday 19 March 2026 05:08:20 +0000 (0:00:01.412) 
0:32:13.834 ******** 2026-03-19 05:08:45.373301 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373312 | orchestrator | 2026-03-19 05:08:45.373321 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-19 05:08:45.373328 | orchestrator | Thursday 19 March 2026 05:08:22 +0000 (0:00:02.247) 0:32:16.082 ******** 2026-03-19 05:08:45.373334 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373340 | orchestrator | 2026-03-19 05:08:45.373347 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-19 05:08:45.373353 | orchestrator | Thursday 19 March 2026 05:08:25 +0000 (0:00:02.900) 0:32:18.982 ******** 2026-03-19 05:08:45.373360 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373366 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373372 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373378 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.373384 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.373391 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.373397 | orchestrator | 2026-03-19 05:08:45.373403 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-19 05:08:45.373409 | orchestrator | Thursday 19 March 2026 05:08:27 +0000 (0:00:01.510) 0:32:20.493 ******** 2026-03-19 05:08:45.373415 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373422 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373428 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373434 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.373440 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.373446 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.373452 | orchestrator | 2026-03-19 05:08:45.373459 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-19 05:08:45.373484 | orchestrator 
| Thursday 19 March 2026 05:08:28 +0000 (0:00:01.016) 0:32:21.510 ******** 2026-03-19 05:08:45.373492 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-19 05:08:45.373500 | orchestrator | 2026-03-19 05:08:45.373506 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-19 05:08:45.373512 | orchestrator | Thursday 19 March 2026 05:08:30 +0000 (0:00:01.789) 0:32:23.299 ******** 2026-03-19 05:08:45.373527 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373533 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373539 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373546 | orchestrator | ok: [testbed-node-3] 2026-03-19 05:08:45.373552 | orchestrator | ok: [testbed-node-4] 2026-03-19 05:08:45.373558 | orchestrator | ok: [testbed-node-5] 2026-03-19 05:08:45.373564 | orchestrator | 2026-03-19 05:08:45.373571 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-19 05:08:45.373577 | orchestrator | Thursday 19 March 2026 05:08:31 +0000 (0:00:01.834) 0:32:25.133 ******** 2026-03-19 05:08:45.373583 | orchestrator | changed: [testbed-node-3] 2026-03-19 05:08:45.373590 | orchestrator | changed: [testbed-node-4] 2026-03-19 05:08:45.373596 | orchestrator | changed: [testbed-node-0] 2026-03-19 05:08:45.373602 | orchestrator | changed: [testbed-node-5] 2026-03-19 05:08:45.373608 | orchestrator | changed: [testbed-node-1] 2026-03-19 05:08:45.373614 | orchestrator | changed: [testbed-node-2] 2026-03-19 05:08:45.373621 | orchestrator | 2026-03-19 05:08:45.373627 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-03-19 05:08:45.373633 | orchestrator | 2026-03-19 05:08:45.373640 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-03-19 05:08:45.373646 | orchestrator | Thursday 19 March 2026 05:08:35 +0000 (0:00:03.874) 0:32:29.008 ******** 2026-03-19 05:08:45.373652 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373658 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373664 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373671 | orchestrator | 2026-03-19 05:08:45.373677 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:08:45.373683 | orchestrator | Thursday 19 March 2026 05:08:36 +0000 (0:00:00.677) 0:32:29.685 ******** 2026-03-19 05:08:45.373694 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373700 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:08:45.373707 | orchestrator | ok: [testbed-node-2] 2026-03-19 05:08:45.373713 | orchestrator | 2026-03-19 05:08:45.373719 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-19 05:08:45.373727 | orchestrator | Thursday 19 March 2026 05:08:37 +0000 (0:00:00.828) 0:32:30.514 ******** 2026-03-19 05:08:45.373733 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:08:45.373739 | orchestrator | 2026-03-19 05:08:45.373745 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-19 05:08:45.373752 | orchestrator | Thursday 19 March 2026 05:08:38 +0000 (0:00:01.403) 0:32:31.917 ******** 2026-03-19 05:08:45.373758 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.373764 | orchestrator | 2026-03-19 05:08:45.373801 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-03-19 05:08:45.373811 | orchestrator | 2026-03-19 05:08:45.373817 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-03-19 05:08:45.373823 | orchestrator | Thursday 19 March 2026 05:08:39 +0000 (0:00:01.097) 0:32:33.015 ******** 2026-03-19 
05:08:45.373829 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.373836 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.373842 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.373848 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.373854 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.373861 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.373867 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:08:45.373873 | orchestrator | 2026-03-19 05:08:45.373880 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 05:08:45.373886 | orchestrator | Thursday 19 March 2026 05:08:40 +0000 (0:00:01.004) 0:32:34.020 ******** 2026-03-19 05:08:45.373892 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.373898 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.373904 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.373910 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.373917 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.373923 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.373929 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:08:45.373936 | orchestrator | 2026-03-19 05:08:45.373942 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-19 05:08:45.373948 | orchestrator | Thursday 19 March 2026 05:08:42 +0000 (0:00:01.479) 0:32:35.500 ******** 2026-03-19 05:08:45.373954 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.373961 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.373967 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.373973 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.373979 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.373985 | orchestrator | skipping: [testbed-node-5] 2026-03-19 
05:08:45.373992 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:08:45.373998 | orchestrator | 2026-03-19 05:08:45.374004 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-19 05:08:45.374010 | orchestrator | Thursday 19 March 2026 05:08:43 +0000 (0:00:01.427) 0:32:36.928 ******** 2026-03-19 05:08:45.374066 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.374074 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.374081 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.374088 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:08:45.374094 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:08:45.374100 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:08:45.374106 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:08:45.374112 | orchestrator | 2026-03-19 05:08:45.374119 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-03-19 05:08:45.374125 | orchestrator | Thursday 19 March 2026 05:08:44 +0000 (0:00:01.099) 0:32:38.027 ******** 2026-03-19 05:08:45.374140 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:08:45.374147 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:08:45.374153 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:08:45.374165 | orchestrator | skipping: [testbed-node-3] 2026-03-19 05:09:04.706566 | orchestrator | skipping: [testbed-node-4] 2026-03-19 05:09:04.706703 | orchestrator | skipping: [testbed-node-5] 2026-03-19 05:09:04.706720 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.706733 | orchestrator | 2026-03-19 05:09:04.706746 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-03-19 05:09:04.706871 | orchestrator | 2026-03-19 05:09:04.706902 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-03-19 05:09:04.706921 | 
orchestrator | Thursday 19 March 2026 05:08:46 +0000 (0:00:01.773) 0:32:39.800 ******** 2026-03-19 05:09:04.706972 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-03-19 05:09:04.706990 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-03-19 05:09:04.707007 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-03-19 05:09:04.707026 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707044 | orchestrator | 2026-03-19 05:09:04.707063 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-19 05:09:04.707082 | orchestrator | Thursday 19 March 2026 05:08:46 +0000 (0:00:00.345) 0:32:40.146 ******** 2026-03-19 05:09:04.707100 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707117 | orchestrator | 2026-03-19 05:09:04.707134 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-19 05:09:04.707154 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.150) 0:32:40.297 ******** 2026-03-19 05:09:04.707171 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707189 | orchestrator | 2026-03-19 05:09:04.707207 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-19 05:09:04.707226 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.145) 0:32:40.443 ******** 2026-03-19 05:09:04.707245 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707263 | orchestrator | 2026-03-19 05:09:04.707282 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-19 05:09:04.707305 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.133) 0:32:40.576 ******** 2026-03-19 05:09:04.707330 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707348 | orchestrator | 2026-03-19 05:09:04.707365 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-03-19 05:09:04.707383 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.216) 0:32:40.793 ******** 2026-03-19 05:09:04.707403 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-03-19 05:09:04.707421 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-03-19 05:09:04.707440 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707454 | orchestrator | 2026-03-19 05:09:04.707466 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-03-19 05:09:04.707477 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.142) 0:32:40.936 ******** 2026-03-19 05:09:04.707488 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707499 | orchestrator | 2026-03-19 05:09:04.707510 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-03-19 05:09:04.707521 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.129) 0:32:41.066 ******** 2026-03-19 05:09:04.707532 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707543 | orchestrator | 2026-03-19 05:09:04.707554 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-03-19 05:09:04.707565 | orchestrator | Thursday 19 March 2026 05:08:47 +0000 (0:00:00.150) 0:32:41.216 ******** 2026-03-19 05:09:04.707576 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707587 | orchestrator | 2026-03-19 05:09:04.707598 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-03-19 05:09:04.707636 | orchestrator | Thursday 19 March 2026 05:08:48 +0000 (0:00:00.358) 0:32:41.574 ******** 2026-03-19 05:09:04.707648 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-03-19 05:09:04.707659 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-03-19 05:09:04.707670 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707681 | orchestrator | 2026-03-19 05:09:04.707691 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-03-19 05:09:04.707702 | orchestrator | Thursday 19 March 2026 05:08:48 +0000 (0:00:00.152) 0:32:41.727 ******** 2026-03-19 05:09:04.707713 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707724 | orchestrator | 2026-03-19 05:09:04.707735 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-03-19 05:09:04.707746 | orchestrator | Thursday 19 March 2026 05:08:48 +0000 (0:00:00.143) 0:32:41.871 ******** 2026-03-19 05:09:04.707757 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707798 | orchestrator | 2026-03-19 05:09:04.707818 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-03-19 05:09:04.707829 | orchestrator | Thursday 19 March 2026 05:08:48 +0000 (0:00:00.244) 0:32:42.115 ******** 2026-03-19 05:09:04.707840 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707851 | orchestrator | 2026-03-19 05:09:04.707862 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-03-19 05:09:04.707873 | orchestrator | Thursday 19 March 2026 05:08:48 +0000 (0:00:00.134) 0:32:42.250 ******** 2026-03-19 05:09:04.707884 | orchestrator | skipping: [testbed-manager] 2026-03-19 05:09:04.707895 | orchestrator | 2026-03-19 05:09:04.707905 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-03-19 05:09:04.707916 | orchestrator | 2026-03-19 05:09:04.707927 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-19 05:09:04.707938 | orchestrator | Thursday 19 March 2026 05:08:49 +0000 (0:00:00.779) 0:32:43.029 ******** 2026-03-19 
05:09:04.707948 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.707959 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.707970 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.707981 | orchestrator | 2026-03-19 05:09:04.707992 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-19 05:09:04.708002 | orchestrator | Thursday 19 March 2026 05:08:50 +0000 (0:00:00.714) 0:32:43.744 ******** 2026-03-19 05:09:04.708013 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708024 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.708058 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.708069 | orchestrator | 2026-03-19 05:09:04.708080 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-19 05:09:04.708091 | orchestrator | Thursday 19 March 2026 05:08:50 +0000 (0:00:00.337) 0:32:44.082 ******** 2026-03-19 05:09:04.708102 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708113 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.708124 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.708135 | orchestrator | 2026-03-19 05:09:04.708146 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-19 05:09:04.708165 | orchestrator | Thursday 19 March 2026 05:08:51 +0000 (0:00:00.308) 0:32:44.390 ******** 2026-03-19 05:09:04.708177 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708187 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.708198 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.708209 | orchestrator | 2026-03-19 05:09:04.708220 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-19 05:09:04.708231 | orchestrator | Thursday 19 March 2026 05:08:51 +0000 (0:00:00.600) 0:32:44.990 ******** 2026-03-19 
05:09:04.708242 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708252 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.708263 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.708283 | orchestrator | 2026-03-19 05:09:04.708294 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-03-19 05:09:04.708305 | orchestrator | Thursday 19 March 2026 05:08:52 +0000 (0:00:00.547) 0:32:45.538 ******** 2026-03-19 05:09:04.708316 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708327 | orchestrator | skipping: [testbed-node-1] 2026-03-19 05:09:04.708338 | orchestrator | skipping: [testbed-node-2] 2026-03-19 05:09:04.708350 | orchestrator | 2026-03-19 05:09:04.708368 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-03-19 05:09:04.708384 | orchestrator | Thursday 19 March 2026 05:08:52 +0000 (0:00:00.318) 0:32:45.857 ******** 2026-03-19 05:09:04.708400 | orchestrator | skipping: [testbed-node-0] 2026-03-19 05:09:04.708418 | orchestrator | 2026-03-19 05:09:04.708434 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-03-19 05:09:04.708450 | orchestrator | 2026-03-19 05:09:04.708466 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-19 05:09:04.708482 | orchestrator | Thursday 19 March 2026 05:08:53 +0000 (0:00:01.146) 0:32:47.003 ******** 2026-03-19 05:09:04.708500 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708518 | orchestrator | 2026-03-19 05:09:04.708535 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-19 05:09:04.708552 | orchestrator | Thursday 19 March 2026 05:08:54 +0000 (0:00:00.508) 0:32:47.511 ******** 2026-03-19 05:09:04.708570 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708588 | orchestrator | 2026-03-19 05:09:04.708606 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-03-19 05:09:04.708626 | orchestrator | Thursday 19 March 2026 05:08:54 +0000 (0:00:00.218) 0:32:47.730 ******** 2026-03-19 05:09:04.708644 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708663 | orchestrator | 2026-03-19 05:09:04.708675 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-03-19 05:09:04.708685 | orchestrator | Thursday 19 March 2026 05:08:54 +0000 (0:00:00.140) 0:32:47.870 ******** 2026-03-19 05:09:04.708696 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708707 | orchestrator | 2026-03-19 05:09:04.708718 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-03-19 05:09:04.708729 | orchestrator | Thursday 19 March 2026 05:08:56 +0000 (0:00:02.050) 0:32:49.921 ******** 2026-03-19 05:09:04.708740 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708751 | orchestrator | 2026-03-19 05:09:04.708762 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-03-19 05:09:04.708805 | orchestrator | Thursday 19 March 2026 05:08:59 +0000 (0:00:02.482) 0:32:52.404 ******** 2026-03-19 05:09:04.708822 | orchestrator | changed: [testbed-node-0] 2026-03-19 05:09:04.708840 | orchestrator | 2026-03-19 05:09:04.708858 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-03-19 05:09:04.708877 | orchestrator | 2026-03-19 05:09:04.708896 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-03-19 05:09:04.708916 | orchestrator | Thursday 19 March 2026 05:09:00 +0000 (0:00:01.101) 0:32:53.505 ******** 2026-03-19 05:09:04.708936 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.708955 | orchestrator | ok: [testbed-node-1] 2026-03-19 05:09:04.708973 | orchestrator | ok: [testbed-node-2] 2026-03-19 
05:09:04.708992 | orchestrator | 2026-03-19 05:09:04.709013 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-03-19 05:09:04.709032 | orchestrator | Thursday 19 March 2026 05:09:01 +0000 (0:00:00.807) 0:32:54.312 ******** 2026-03-19 05:09:04.709050 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.709063 | orchestrator | 2026-03-19 05:09:04.709075 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-03-19 05:09:04.709086 | orchestrator | Thursday 19 March 2026 05:09:02 +0000 (0:00:01.324) 0:32:55.637 ******** 2026-03-19 05:09:04.709096 | orchestrator | ok: [testbed-node-0] 2026-03-19 05:09:04.709107 | orchestrator | 2026-03-19 05:09:04.709118 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-19 05:09:04.709142 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-19 05:09:04.709156 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-03-19 05:09:04.709168 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0 2026-03-19 05:09:04.709179 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0 2026-03-19 05:09:04.709204 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-03-19 05:09:05.331946 | orchestrator | testbed-node-3 : ok=311  changed=21  unreachable=0 failed=0 skipped=341  rescued=0 ignored=0 2026-03-19 05:09:05.332101 | orchestrator | testbed-node-4 : ok=307  changed=17  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0 2026-03-19 05:09:05.332129 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-03-19 05:09:05.332148 | orchestrator | 2026-03-19 
05:09:05.332166 | orchestrator | 2026-03-19 05:09:05.332183 | orchestrator | 2026-03-19 05:09:05.332202 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-19 05:09:05.332221 | orchestrator | Thursday 19 March 2026 05:09:04 +0000 (0:00:02.304) 0:32:57.941 ******** 2026-03-19 05:09:05.332238 | orchestrator | =============================================================================== 2026-03-19 05:09:05.332257 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 79.06s 2026-03-19 05:09:05.332274 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 77.04s 2026-03-19 05:09:05.332293 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 47.77s 2026-03-19 05:09:05.332310 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 34.85s 2026-03-19 05:09:05.332329 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 34.24s 2026-03-19 05:09:05.332347 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.87s 2026-03-19 05:09:05.332364 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.39s 2026-03-19 05:09:05.332382 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 28.66s 2026-03-19 05:09:05.332400 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 26.02s 2026-03-19 05:09:05.332419 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.92s 2026-03-19 05:09:05.332437 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.88s 2026-03-19 05:09:05.332455 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 20.81s 2026-03-19 05:09:05.332474 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 18.81s 2026-03-19 05:09:05.332491 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.95s 2026-03-19 05:09:05.332509 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.91s 2026-03-19 05:09:05.332526 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 12.47s 2026-03-19 05:09:05.332544 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.56s 2026-03-19 05:09:05.332561 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.40s 2026-03-19 05:09:05.332579 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.44s 2026-03-19 05:09:05.332597 | orchestrator | Set cluster configs ---------------------------------------------------- 10.35s 2026-03-19 05:09:05.612002 | orchestrator | + osism apply cephclient 2026-03-19 05:09:07.620364 | orchestrator | 2026-03-19 05:09:07 | INFO  | Task 75b13220-4120-4c15-a2b6-6252eabc78c9 (cephclient) was prepared for execution. 2026-03-19 05:09:07.620475 | orchestrator | 2026-03-19 05:09:07 | INFO  | It takes a moment until task 75b13220-4120-4c15-a2b6-6252eabc78c9 (cephclient) has been started and output is visible here. 
2026-03-19 05:09:34.151114 | orchestrator | 2026-03-19 05:09:34.151231 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-19 05:09:34.151247 | orchestrator | 2026-03-19 05:09:34.151257 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-19 05:09:34.151267 | orchestrator | Thursday 19 March 2026 05:09:13 +0000 (0:00:01.922) 0:00:01.922 ******** 2026-03-19 05:09:34.151278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-19 05:09:34.151289 | orchestrator | 2026-03-19 05:09:34.151299 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-19 05:09:34.151309 | orchestrator | Thursday 19 March 2026 05:09:15 +0000 (0:00:01.535) 0:00:03.458 ******** 2026-03-19 05:09:34.151319 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-19 05:09:34.151329 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-19 05:09:34.151339 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-19 05:09:34.151348 | orchestrator | 2026-03-19 05:09:34.151358 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-19 05:09:34.151368 | orchestrator | Thursday 19 March 2026 05:09:17 +0000 (0:00:02.367) 0:00:05.825 ******** 2026-03-19 05:09:34.151378 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-19 05:09:34.151387 | orchestrator | 2026-03-19 05:09:34.151397 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-19 05:09:34.151406 | orchestrator | Thursday 19 March 2026 05:09:19 +0000 (0:00:01.855) 0:00:07.680 ******** 2026-03-19 05:09:34.151416 | orchestrator | ok: 
[testbed-manager]
2026-03-19 05:09:34.151426 | orchestrator |
2026-03-19 05:09:34.151435 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-19 05:09:34.151445 | orchestrator | Thursday 19 March 2026 05:09:21 +0000 (0:00:01.726) 0:00:09.407 ********
2026-03-19 05:09:34.151455 | orchestrator | ok: [testbed-manager]
2026-03-19 05:09:34.151465 | orchestrator |
2026-03-19 05:09:34.151474 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-19 05:09:34.151484 | orchestrator | Thursday 19 March 2026 05:09:22 +0000 (0:00:01.687) 0:00:11.095 ********
2026-03-19 05:09:34.151493 | orchestrator | ok: [testbed-manager]
2026-03-19 05:09:34.151503 | orchestrator |
2026-03-19 05:09:34.151512 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-19 05:09:34.151538 | orchestrator | Thursday 19 March 2026 05:09:25 +0000 (0:00:02.023) 0:00:13.119 ********
2026-03-19 05:09:34.151548 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-19 05:09:34.151558 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-03-19 05:09:34.151567 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-19 05:09:34.151577 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-19 05:09:34.151587 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-19 05:09:34.151596 | orchestrator |
2026-03-19 05:09:34.151606 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-19 05:09:34.151615 | orchestrator | Thursday 19 March 2026 05:09:29 +0000 (0:00:04.806) 0:00:17.926 ********
2026-03-19 05:09:34.151625 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-19 05:09:34.151634 | orchestrator |
2026-03-19 05:09:34.151644 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-19 05:09:34.151656 | orchestrator | Thursday 19 March 2026 05:09:31 +0000 (0:00:01.440) 0:00:19.366 ********
2026-03-19 05:09:34.151689 | orchestrator | skipping: [testbed-manager]
2026-03-19 05:09:34.151701 | orchestrator |
2026-03-19 05:09:34.151712 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-19 05:09:34.151724 | orchestrator | Thursday 19 March 2026 05:09:32 +0000 (0:00:01.140) 0:00:20.506 ********
2026-03-19 05:09:34.151734 | orchestrator | skipping: [testbed-manager]
2026-03-19 05:09:34.151745 | orchestrator |
2026-03-19 05:09:34.151756 | orchestrator | PLAY RECAP *********************************************************************
2026-03-19 05:09:34.151797 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-19 05:09:34.151810 | orchestrator |
2026-03-19 05:09:34.151821 | orchestrator |
2026-03-19 05:09:34.151832 | orchestrator | TASKS RECAP ********************************************************************
2026-03-19 05:09:34.151843 | orchestrator | Thursday 19 March 2026 05:09:33 +0000 (0:00:01.488) 0:00:21.995 ********
2026-03-19 05:09:34.151853 | orchestrator | ===============================================================================
2026-03-19 05:09:34.151866 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.81s
2026-03-19 05:09:34.151881 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.37s
2026-03-19 05:09:34.151898 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.02s
2026-03-19 05:09:34.151914 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.86s
2026-03-19 05:09:34.151931 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.73s
2026-03-19 05:09:34.151948 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.69s
2026-03-19 05:09:34.151963 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.54s
2026-03-19 05:09:34.151979 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.49s
2026-03-19 05:09:34.151995 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.44s
2026-03-19 05:09:34.152011 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.14s
2026-03-19 05:09:34.437817 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-19 05:09:34.437914 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-03-19 05:09:34.443798 | orchestrator | + set -e
2026-03-19 05:09:34.443888 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-19 05:09:34.443896 | orchestrator | ++ export INTERACTIVE=false
2026-03-19 05:09:34.443901 | orchestrator | ++ INTERACTIVE=false
2026-03-19 05:09:34.443905 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-19 05:09:34.443909 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-19 05:09:34.443914 | orchestrator | + source /opt/manager-vars.sh
2026-03-19 05:09:34.443918 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-19 05:09:34.443922 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-19 05:09:34.443926 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-19 05:09:34.443930 | orchestrator | ++ CEPH_VERSION=reef
2026-03-19 05:09:34.444110 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-19 05:09:34.444121 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-19 05:09:34.444125 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-19 05:09:34.444130 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-19 05:09:34.444134 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-19 05:09:34.444139 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-19 05:09:34.444143 | orchestrator | ++ export ARA=false
2026-03-19 05:09:34.444147 | orchestrator | ++ ARA=false
2026-03-19 05:09:34.444151 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-19 05:09:34.444156 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-19 05:09:34.444163 | orchestrator | ++ export TEMPEST=false
2026-03-19 05:09:34.444169 | orchestrator | ++ TEMPEST=false
2026-03-19 05:09:34.444174 | orchestrator | ++ export IS_ZUUL=true
2026-03-19 05:09:34.444179 | orchestrator | ++ IS_ZUUL=true
2026-03-19 05:09:34.444188 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 05:09:34.444196 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.56
2026-03-19 05:09:34.444202 | orchestrator | ++ export EXTERNAL_API=false
2026-03-19 05:09:34.444209 | orchestrator | ++ EXTERNAL_API=false
2026-03-19 05:09:34.444215 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-19 05:09:34.444222 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-19 05:09:34.444261 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-19 05:09:34.444265 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-19 05:09:34.444269 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-19 05:09:34.444273 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-19 05:09:34.444277 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-19 05:09:34.444280 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-19 05:09:34.444284 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-19 05:09:34.444898 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-19 05:09:34.452660 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-19 05:09:34.452832 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-19 05:09:34.452850 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-19 05:09:34.452861 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-03-19 05:09:54.237004 | orchestrator | 2026-03-19 05:09:54 | ERROR  | Unable to get ansible vault password
2026-03-19 05:09:54.237120 | orchestrator | 2026-03-19 05:09:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-19 05:09:54.237141 | orchestrator | 2026-03-19 05:09:54 | ERROR  | Dropping encrypted entries
2026-03-19 05:09:54.279179 | orchestrator | 2026-03-19 05:09:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-19 05:09:54.279323 | orchestrator | 2026-03-19 05:09:54 | INFO  | Kolla configuration check passed
2026-03-19 05:09:54.479012 | orchestrator | 2026-03-19 05:09:54 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-03-19 05:09:54.497145 | orchestrator | 2026-03-19 05:09:54 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-03-19 05:09:54.707665 | orchestrator | + osism migrate rabbitmq3to4 list
2026-03-19 05:10:12.891678 | orchestrator | 2026-03-19 05:10:12 | ERROR  | Unable to get ansible vault password
2026-03-19 05:10:12.891825 | orchestrator | 2026-03-19 05:10:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-19 05:10:12.891850 | orchestrator | 2026-03-19 05:10:12 | ERROR  | Dropping encrypted entries
2026-03-19 05:10:12.921036 | orchestrator | 2026-03-19 05:10:12 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-19 05:10:13.094387 | orchestrator | 2026-03-19 05:10:13 | INFO  | Found 204 classic queue(s) in vhost '/':
2026-03-19 05:10:13.094489 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-03-19 05:10:13.094496 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-03-19 05:10:13.094538 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-03-19 05:10:13.094547 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-03-19 05:10:13.094729 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican.workers_fanout_2e1a69e40d874c319711d758c967abce (vhost: /, messages: 0)
2026-03-19 05:10:13.094865 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican.workers_fanout_33ac1e4cebe04ca2a7f2cd13b348a524 (vhost: /, messages: 0)
2026-03-19 05:10:13.095681 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican.workers_fanout_661fdfd69b61494bb69536eed9586aa6 (vhost: /, messages: 0)
2026-03-19 05:10:13.095690 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-03-19 05:10:13.095695 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central (vhost: /, messages: 0)
2026-03-19 05:10:13.095701 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.095726 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.095731 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.095736 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_23678d2284fc45a5851ff5410c38d4c1 (vhost: /, messages: 0)
2026-03-19 05:10:13.095741 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_313625f6da164211b201d292fb0fc1a7 (vhost: /, messages: 0)
2026-03-19 05:10:13.095745 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_4ee7eb5e3dd94f998dfc1ddffb87f057 (vhost: /, messages: 0)
2026-03-19 05:10:13.095750 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_62890495bdd64b38b4c67ef18f397873 (vhost: /, messages: 0)
2026-03-19 05:10:13.095755 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_d43d34500df54766a7d1a5b25dc05b0f (vhost: /, messages: 0)
2026-03-19 05:10:13.095759 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - central_fanout_e243822c138a455fba297f4fc2e13d20 (vhost: /, messages: 0)
2026-03-19 05:10:13.095831 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-03-19 05:10:13.095837 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.095844 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.095935 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.096085 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup_fanout_5e7e17ef22f7452095ebd9cefe7fb504 (vhost: /, messages: 0)
2026-03-19 05:10:13.096202 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup_fanout_98bf07aee1bf4533a32af1fdb81d5571 (vhost: /, messages: 0)
2026-03-19 05:10:13.096359 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-backup_fanout_f83e995842f749269c08cb945b9e25d8 (vhost: /, messages: 0)
2026-03-19 05:10:13.096459 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-03-19 05:10:13.096635 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.096794 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.096877 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.097023 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler_fanout_756454db7cf5456c831ee66cff4b79e2 (vhost: /, messages: 0)
2026-03-19 05:10:13.097100 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler_fanout_9ff6052082d744c1a365d9b4004fabc1 (vhost: /, messages: 0)
2026-03-19 05:10:13.097239 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-scheduler_fanout_c10494d175e84300a1816c0ca137c5d8 (vhost: /, messages: 0)
2026-03-19 05:10:13.097357 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-03-19 05:10:13.097464 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-03-19 05:10:13.097629 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.097699 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_c40b423c7cf144a7b4bf70f0b30125a3 (vhost: /, messages: 0)
2026-03-19 05:10:13.098131 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-03-19 05:10:13.098140 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.101425 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_f919177f0ff842b9bfc4f8b4f9403420 (vhost: /, messages: 0)
2026-03-19 05:10:13.101518 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-03-19 05:10:13.101532 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.101540 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_e12da2c3ab6c43308167a49f4b4452aa (vhost: /, messages: 0)
2026-03-19 05:10:13.101567 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume_fanout_267f6ffa2f0e4afb9b32c5230eba42bb (vhost: /, messages: 0)
2026-03-19 05:10:13.101574 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume_fanout_68e169a842b64ae38f7510a82f973432 (vhost: /, messages: 0)
2026-03-19 05:10:13.101582 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - cinder-volume_fanout_839d4b294c404fa2ada08d508a2f67c6 (vhost: /, messages: 0)
2026-03-19 05:10:13.101590 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute (vhost: /, messages: 0)
2026-03-19 05:10:13.101598 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0)
2026-03-19 05:10:13.101605 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0)
2026-03-19 05:10:13.101652 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0)
2026-03-19 05:10:13.101664 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute_fanout_028f69bb15d4499390a49aeb682473d0 (vhost: /, messages: 0)
2026-03-19 05:10:13.101670 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute_fanout_5b0fbb02e9224b379dea5166fa356d80 (vhost: /, messages: 0)
2026-03-19 05:10:13.101677 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - compute_fanout_e20602a1312d4e81b04e6c62aa236630 (vhost: /, messages: 0)
2026-03-19 05:10:13.101684 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor (vhost: /, messages: 0)
2026-03-19 05:10:13.101691 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.101698 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.101705 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.101712 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_34b131da056f43e68d4494b33341ee73 (vhost: /, messages: 0)
2026-03-19 05:10:13.101730 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_4983d0e50b534739ad9b3e58d2974870 (vhost: /, messages: 0)
2026-03-19 05:10:13.101737 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_5c417e0d47f84dbc945898ac35445926 (vhost: /, messages: 0)
2026-03-19 05:10:13.101744 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_77c940552ad94441994238743c370649 (vhost: /, messages: 0)
2026-03-19 05:10:13.101752 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_c6e9feea139544f2a99cd11f28a840dd (vhost: /, messages: 0)
2026-03-19 05:10:13.101759 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - conductor_fanout_e1969cb5108b47679fdcd22c3109e6a9 (vhost: /, messages: 0)
2026-03-19 05:10:13.101802 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - event.sample (vhost: /, messages: 3)
2026-03-19 05:10:13.101858 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-03-19 05:10:13.101864 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor.puikv3y3ctce (vhost: /, messages: 0)
2026-03-19 05:10:13.101868 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor.rskax72jgm5d (vhost: /, messages: 0)
2026-03-19 05:10:13.101874 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor.to2mz57zt2yb (vhost: /, messages: 0)
2026-03-19 05:10:13.101878 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_2d2b4ddd50ec4d718fe17c168c366bf2 (vhost: /, messages: 0)
2026-03-19 05:10:13.101883 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_378ddd99fb80421d86015624a7a3a1d0 (vhost: /, messages: 0)
2026-03-19 05:10:13.101887 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_43c52dbee62643758a895542631de34a (vhost: /, messages: 0)
2026-03-19 05:10:13.101906 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_762b379204c0461bada2a3ef058a44cf (vhost: /, messages: 0)
2026-03-19 05:10:13.101913 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_7e3c41b14f3f4eab8268fa1252963494 (vhost: /, messages: 0)
2026-03-19 05:10:13.101920 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_8267070e1d3e4ace8ea1bd1903dc3ec0 (vhost: /, messages: 0)
2026-03-19 05:10:13.101969 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_a132f4ecbe82407db278212d03a11891 (vhost: /, messages: 0)
2026-03-19 05:10:13.101978 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_badd997f4a4a498fa71dc9be7717cec4 (vhost: /, messages: 0)
2026-03-19 05:10:13.101985 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - magnum-conductor_fanout_d4ed630a090341b9a2e90481c076836a (vhost: /, messages: 0)
2026-03-19 05:10:13.101992 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-03-19 05:10:13.102000 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.102007 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102050 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.102059 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data_fanout_6d122fc4df464ec8aa084ff484bd1134 (vhost: /, messages: 0)
2026-03-19 05:10:13.102065 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data_fanout_ccff6137659b4d00a10558436882995d (vhost: /, messages: 0)
2026-03-19 05:10:13.102071 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-data_fanout_e45f7933f3e74c54ac2529a191f035f7 (vhost: /, messages: 0)
2026-03-19 05:10:13.102078 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-03-19 05:10:13.102140 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.102146 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102150 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.102155 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler_fanout_15e06d9abcdf4810bdbda9a34d14108c (vhost: /, messages: 0)
2026-03-19 05:10:13.102169 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler_fanout_8950b2a097154b5f86a47f193d298f97 (vhost: /, messages: 0)
2026-03-19 05:10:13.102179 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-scheduler_fanout_e382f446ba1c43eda334f6abd4f999fe (vhost: /, messages: 0)
2026-03-19 05:10:13.102183 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-03-19 05:10:13.102188 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102192 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102196 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102200 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - manila-share_fanout_752668025ffb47bfa7040f72a11a4e40 (vhost: /, messages: 0)
2026-03-19 05:10:13.102241 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-03-19 05:10:13.102245 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-03-19 05:10:13.102250 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-03-19 05:10:13.102254 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-03-19 05:10:13.102258 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-03-19 05:10:13.102263 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-03-19 05:10:13.102267 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-03-19 05:10:13.102271 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-03-19 05:10:13.102287 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.102291 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.102296 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.102300 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2_fanout_be73a14a3cf54aac95ba4019f9daaffe (vhost: /, messages: 0)
2026-03-19 05:10:13.102306 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2_fanout_db6a3c53b6dc4628bf0386da497f864a (vhost: /, messages: 0)
2026-03-19 05:10:13.102311 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - octavia_provisioning_v2_fanout_ec9cda2afeff4a28965156ae6e5de994 (vhost: /, messages: 0)
2026-03-19 05:10:13.102715 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer (vhost: /, messages: 0)
2026-03-19 05:10:13.102724 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.102939 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.103089 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.103233 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_108856901ea14d0aadf9515f25a1fec2 (vhost: /, messages: 0)
2026-03-19 05:10:13.103421 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_2622ec3d382a4f30a15dc1b687ed1171 (vhost: /, messages: 0)
2026-03-19 05:10:13.103557 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_31924efc263a43f6a2c3ae1a66e7c0c7 (vhost: /, messages: 0)
2026-03-19 05:10:13.103682 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_4576516d24a94c8aa6da2dcd399419fc (vhost: /, messages: 0)
2026-03-19 05:10:13.103854 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_5d5c481e8435453eb5ed1fc4d7b4a8a9 (vhost: /, messages: 0)
2026-03-19 05:10:13.103943 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - producer_fanout_aacbc6bb0ced4378ae97c87546f3bd79 (vhost: /, messages: 0)
2026-03-19 05:10:13.104106 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-03-19 05:10:13.104219 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.104431 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.104519 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.104622 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_1f8efbe6fa43455e8836b3f9d543d48a (vhost: /, messages: 0)
2026-03-19 05:10:13.104782 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_281f9d95986d46aba9af611aacfc4871 (vhost: /, messages: 0)
2026-03-19 05:10:13.104904 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_32f94ef014e040699b6effc5a00bd73c (vhost: /, messages: 0)
2026-03-19 05:10:13.105027 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_36735ba460504caf967fd6e01d06939c (vhost: /, messages: 0)
2026-03-19 05:10:13.105165 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_400269f6cb584eda9ecb204a518321e8 (vhost: /, messages: 0)
2026-03-19 05:10:13.105312 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_6319f4c0e0d94cc1918dcf96ed326d34 (vhost: /, messages: 0)
2026-03-19 05:10:13.105416 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_75643f298c9e414b89be27f77e08e1af (vhost: /, messages: 0)
2026-03-19 05:10:13.105562 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_87061bf878fc499ab732dcd0d828393f (vhost: /, messages: 0)
2026-03-19 05:10:13.105662 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-plugin_fanout_e07b34970bad4c75ba618e9d06152098 (vhost: /, messages: 0)
2026-03-19 05:10:13.105834 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-03-19 05:10:13.105999 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.106138 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.106187 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.106347 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_121fb8894fcc4a38ad9f6a7cb41bd811 (vhost: /, messages: 0)
2026-03-19 05:10:13.106448 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_27aca2ba97a44b4dacd76489b85d648a (vhost: /, messages: 0)
2026-03-19 05:10:13.106612 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_2cee935c2f564456b6d56752af0f9693 (vhost: /, messages: 0)
2026-03-19 05:10:13.106694 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_2e43a4ef3c9f4335a8c89949be945bbb (vhost: /, messages: 0)
2026-03-19 05:10:13.107169 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_3f93ef2dfba64081bfda3eed618768c5 (vhost: /, messages: 0)
2026-03-19 05:10:13.107305 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_49b3d8630a9443338e39f322643e3ccb (vhost: /, messages: 0)
2026-03-19 05:10:13.107366 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_5de7a987ab1846a381d416d048a9401a (vhost: /, messages: 0)
2026-03-19 05:10:13.107382 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_5e5b7ef5d8c1416291cef8f804ac2cda (vhost: /, messages: 0)
2026-03-19 05:10:13.107388 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_82ba2ae1803a44c2b69ee90627a77486 (vhost: /, messages: 0)
2026-03-19 05:10:13.107515 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_83280d69e5b847eabee8b66d9f6ef63a (vhost: /, messages: 0)
2026-03-19 05:10:13.107657 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_920f0a6d88944564816e81ba7bcedf6c (vhost: /, messages: 0)
2026-03-19 05:10:13.107819 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_a28d3ba04f0a4ecdb836d98aa7a4ce7d (vhost: /, messages: 0)
2026-03-19 05:10:13.107901 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_bee454b2881840fcbcea9b9532d3e9ad (vhost: /, messages: 0)
2026-03-19 05:10:13.108025 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_dada3199f7114ab08da3084ea86f4b76 (vhost: /, messages: 0)
2026-03-19 05:10:13.108169 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_e6372e5d597c4d1897dc2eb56fbdfe25 (vhost: /, messages: 0)
2026-03-19 05:10:13.108282 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_e97da082704c457ebcd36f7c42ea4ee2 (vhost: /, messages: 0)
2026-03-19 05:10:13.108409 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_f7c6f5c9e17f4ca29d2fa4cfcb848e78 (vhost: /, messages: 0)
2026-03-19 05:10:13.108550 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-reports-plugin_fanout_f9892be5b5184a878013087503642631 (vhost: /, messages: 0)
2026-03-19 05:10:13.108646 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-03-19 05:10:13.108838 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.109011 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.109022 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.109269 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_46893d355779492c914d9cb4c3270fda (vhost: /, messages: 0)
2026-03-19 05:10:13.109281 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_50733c8dca7a402292aba8aece039404 (vhost: /, messages: 0)
2026-03-19 05:10:13.109287 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_54fa406161204425b7e994ae531bd652 (vhost: /, messages: 0)
2026-03-19 05:10:13.109439 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_613cc112f0824b7bb1c045c61da7f1f6 (vhost: /, messages: 0)
2026-03-19 05:10:13.109516 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_73e2f1d5861c45c5b7ac4f71d8b88c7f (vhost: /, messages: 0)
2026-03-19 05:10:13.109631 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_753ce5ccadf74b5897da9368b2d7869d (vhost: /, messages: 0)
2026-03-19 05:10:13.109866 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_7d63dd7f55de4cf6a4c112f75999cb2b (vhost: /, messages: 0)
2026-03-19 05:10:13.110011 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_ddc5a2de86a14b28a705c3375f62ec91 (vhost: /, messages: 0)
2026-03-19 05:10:13.110071 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - q-server-resource-versions_fanout_ec1c33991ed148f3aadf484bcd699c48 (vhost: /, messages: 0)
2026-03-19 05:10:13.110184 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_0825fdcc59dd4b509461a2b579181e9f (vhost: /, messages: 0)
2026-03-19 05:10:13.110301 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_11b050e990bd4734ab3b338c4ae49ac4 (vhost: /, messages: 0)
2026-03-19 05:10:13.110464 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_23db9592369745109370fac07b938d2c (vhost: /, messages: 0)
2026-03-19 05:10:13.110578 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_291e2c399b2046069b3126d04c11ae65 (vhost: /, messages: 0)
2026-03-19 05:10:13.110688 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_3a57dc8e43a240359507307cd7569512 (vhost: /, messages: 0)
2026-03-19 05:10:13.110829 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_3ca6bf4ce28043d7bcc5fb87a00e2958 (vhost: /, messages: 0)
2026-03-19 05:10:13.111000 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_4c865f8e4e8448e18e04a28128353c7b (vhost: /, messages: 1)
2026-03-19 05:10:13.111059 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_67e00d5815c942608dd7c25d6ba5a04e (vhost: /, messages: 0)
2026-03-19 05:10:13.111199 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_6daf026615ee42c4850e8d8abccf9b6a (vhost: /, messages: 0)
2026-03-19 05:10:13.111298 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_80536b639a9f45cc8e7e6fcc761b88d2 (vhost: /, messages: 0)
2026-03-19 05:10:13.111488 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_8ec360c79daa4c57b32e71bd08ff6760 (vhost: /, messages: 0)
2026-03-19 05:10:13.111575 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_a2c2ae17559c401e988e9d0d19d82698 (vhost: /, messages: 0)
2026-03-19 05:10:13.111697 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_b49b735a244a40198a45826eda6649e7 (vhost: /, messages: 0)
2026-03-19 05:10:13.111836 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_b607845cc4284cc2a2298a180170beda (vhost: /, messages: 0)
2026-03-19 05:10:13.112335 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_e65a2933685e42c2aeb400051a3e9a70 (vhost: /, messages: 0)
2026-03-19 05:10:13.112348 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_e7c576c1d06a4fd9b667818359433c0a (vhost: /, messages: 0)
2026-03-19 05:10:13.112361 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - reply_f0f0bc32f0084400bbce16f19b15e763 (vhost: /, messages: 0)
2026-03-19 05:10:13.112368 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-03-19 05:10:13.112373 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.112534 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.112551 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.112559 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_567d30d11e494b07b3e68e459bfea57e (vhost: /, messages: 0)
2026-03-19 05:10:13.112569 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_6f35b8b6c2524792880afcaa316b8196 (vhost: /, messages: 0)
2026-03-19 05:10:13.112737 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_77d47c6f32b74d0abf959e565529bfdf (vhost: /, messages: 0)
2026-03-19 05:10:13.112878 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_c83eefb9c39a42e4ab47616750d73a8c (vhost: /, messages: 0)
2026-03-19 05:10:13.112899 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_cf3b8126468742e1893504fe528485b9 (vhost: /, messages: 0)
2026-03-19 05:10:13.113283 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - scheduler_fanout_e6c488e01cd546eda388754900c6f195 (vhost: /, messages: 0)
2026-03-19 05:10:13.113364 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker (vhost: /, messages: 0)
2026-03-19 05:10:13.113371 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-03-19 05:10:13.113381 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-03-19 05:10:13.113386 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-03-19 05:10:13.113525 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_074a6002765d4f9292b8897f54699f11 (vhost: /, messages: 0)
2026-03-19 05:10:13.113621 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_54a277c22f0e4920ba40c900663dcca6 (vhost: /, messages: 0)
2026-03-19 05:10:13.113845 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_6a7e69ecdd1e4607b11a472ae6245c2c (vhost: /, messages: 0)
2026-03-19 05:10:13.113862 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_a03c27f197314e4dae7c664ee35f8b82 (vhost: /, messages: 0)
2026-03-19 05:10:13.114093 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_d6bdaa11a5a7472490b43628cc186049 (vhost: /, messages: 0)
2026-03-19 05:10:13.114122 | orchestrator | 2026-03-19 05:10:13 | INFO  |  - worker_fanout_edf9b1687a9d4ff6bd0eb6f3ad21bbae (vhost: /, messages: 0)
2026-03-19 05:10:13.297483 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-19 05:10:15.025803 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-19 05:10:15.025893 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-03-19 05:10:15.025905 | orchestrator |                                   [--vhost VHOST]
2026-03-19 05:10:15.025914 | orchestrator |                                   [{list,delete,prepare,check}]
2026-03-19 05:10:15.025923 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-19 05:10:15.025933 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-19 05:10:15.685829 | orchestrator | ERROR
2026-03-19 05:10:15.686042 | orchestrator | {
2026-03-19 05:10:15.686078 | orchestrator |   "delta": "1:21:24.119377",
2026-03-19 05:10:15.686102 | orchestrator |   "end": "2026-03-19 05:10:15.256717",
2026-03-19 05:10:15.686123 | orchestrator |   "msg": "non-zero return code",
2026-03-19 05:10:15.686142 | orchestrator |   "rc": 2,
2026-03-19 05:10:15.686160 | orchestrator |   "start": "2026-03-19 03:48:51.137340"
2026-03-19 05:10:15.686178 | orchestrator | } failure
2026-03-19 05:10:15.974540 |
2026-03-19 05:10:15.974773 | PLAY RECAP
2026-03-19 05:10:15.974931 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-19 05:10:15.974990 |
2026-03-19 05:10:16.243353 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-19 05:10:16.244456 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-19 05:10:17.063978 |
2026-03-19 05:10:17.064149 | PLAY [Post output play]
2026-03-19 05:10:17.081560 |
2026-03-19 05:10:17.081697 | LOOP [stage-output : Register sources]
2026-03-19 05:10:17.142933 |
2026-03-19 05:10:17.143316 | TASK [stage-output : Check sudo]
2026-03-19 05:10:18.022312 | orchestrator | sudo: a password is required
2026-03-19 05:10:18.187555 | orchestrator | ok: Runtime: 0:00:00.018531
2026-03-19 05:10:18.201423 |
2026-03-19 05:10:18.201577 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-19 05:10:18.243757 |
2026-03-19 05:10:18.244093 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-19 05:10:18.324268 | orchestrator | ok
2026-03-19 05:10:18.334947 |
2026-03-19
05:10:18.335101 | LOOP [stage-output : Ensure target folders exist] 2026-03-19 05:10:18.797102 | orchestrator | ok: "docs" 2026-03-19 05:10:18.797457 | 2026-03-19 05:10:19.048827 | orchestrator | ok: "artifacts" 2026-03-19 05:10:19.316168 | orchestrator | ok: "logs" 2026-03-19 05:10:19.329946 | 2026-03-19 05:10:19.330115 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-19 05:10:19.367685 | 2026-03-19 05:10:19.367981 | TASK [stage-output : Make all log files readable] 2026-03-19 05:10:19.667724 | orchestrator | ok 2026-03-19 05:10:19.677279 | 2026-03-19 05:10:19.677445 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-19 05:10:19.712597 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:19.724060 | 2026-03-19 05:10:19.724194 | TASK [stage-output : Discover log files for compression] 2026-03-19 05:10:19.748762 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:19.764969 | 2026-03-19 05:10:19.765129 | LOOP [stage-output : Archive everything from logs] 2026-03-19 05:10:19.804493 | 2026-03-19 05:10:19.804638 | PLAY [Post cleanup play] 2026-03-19 05:10:19.812897 | 2026-03-19 05:10:19.812999 | TASK [Set cloud fact (Zuul deployment)] 2026-03-19 05:10:19.870296 | orchestrator | ok 2026-03-19 05:10:19.881202 | 2026-03-19 05:10:19.881315 | TASK [Set cloud fact (local deployment)] 2026-03-19 05:10:19.915478 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:19.932171 | 2026-03-19 05:10:19.932328 | TASK [Clean the cloud environment] 2026-03-19 05:10:20.592652 | orchestrator | 2026-03-19 05:10:20 - clean up servers 2026-03-19 05:10:21.395440 | orchestrator | 2026-03-19 05:10:21 - testbed-manager 2026-03-19 05:10:21.497744 | orchestrator | 2026-03-19 05:10:21 - testbed-node-3 2026-03-19 05:10:21.597788 | orchestrator | 2026-03-19 05:10:21 - testbed-node-2 2026-03-19 05:10:21.696068 | orchestrator | 2026-03-19 05:10:21 - testbed-node-0 2026-03-19 05:10:21.783819 | 
orchestrator | 2026-03-19 05:10:21 - testbed-node-1 2026-03-19 05:10:21.919845 | orchestrator | 2026-03-19 05:10:21 - testbed-node-4 2026-03-19 05:10:22.024729 | orchestrator | 2026-03-19 05:10:22 - testbed-node-5 2026-03-19 05:10:22.114819 | orchestrator | 2026-03-19 05:10:22 - clean up keypairs 2026-03-19 05:10:22.139603 | orchestrator | 2026-03-19 05:10:22 - testbed 2026-03-19 05:10:22.165787 | orchestrator | 2026-03-19 05:10:22 - wait for servers to be gone 2026-03-19 05:10:35.370819 | orchestrator | 2026-03-19 05:10:35 - clean up ports 2026-03-19 05:10:36.105456 | orchestrator | 2026-03-19 05:10:36 - 25b4c106-9819-40e7-bd9d-2cc07c002b6a 2026-03-19 05:10:36.414388 | orchestrator | 2026-03-19 05:10:36 - 4a6b2030-4445-4405-83af-a00ae406896a 2026-03-19 05:10:36.948359 | orchestrator | 2026-03-19 05:10:36 - 64446432-0e9a-41a9-b8e8-d611f4571805 2026-03-19 05:10:37.237241 | orchestrator | 2026-03-19 05:10:37 - 85875a76-0955-4a97-ba0e-17a0e71b656f 2026-03-19 05:10:37.497681 | orchestrator | 2026-03-19 05:10:37 - 9ce12750-9ab2-4f22-8028-02cf815edc36 2026-03-19 05:10:37.722263 | orchestrator | 2026-03-19 05:10:37 - 9f1ad232-b8a0-4e3a-8595-52fc09da3bcf 2026-03-19 05:10:37.966162 | orchestrator | 2026-03-19 05:10:37 - f5265f94-087e-496e-9a7a-c0112583624c 2026-03-19 05:10:38.218434 | orchestrator | 2026-03-19 05:10:38 - clean up volumes 2026-03-19 05:10:38.364878 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-3-node-base 2026-03-19 05:10:38.408063 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-5-node-base 2026-03-19 05:10:38.455015 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-1-node-base 2026-03-19 05:10:38.496551 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-2-node-base 2026-03-19 05:10:38.538111 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-4-node-base 2026-03-19 05:10:38.582166 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-0-node-base 2026-03-19 05:10:38.629201 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-2-node-5 
2026-03-19 05:10:38.671732 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-manager-base 2026-03-19 05:10:38.721676 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-5-node-5 2026-03-19 05:10:38.765394 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-3-node-3 2026-03-19 05:10:38.812701 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-1-node-4 2026-03-19 05:10:38.857584 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-4-node-4 2026-03-19 05:10:38.903816 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-8-node-5 2026-03-19 05:10:38.948650 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-6-node-3 2026-03-19 05:10:38.994855 | orchestrator | 2026-03-19 05:10:38 - testbed-volume-0-node-3 2026-03-19 05:10:39.037054 | orchestrator | 2026-03-19 05:10:39 - testbed-volume-7-node-4 2026-03-19 05:10:39.080882 | orchestrator | 2026-03-19 05:10:39 - disconnect routers 2026-03-19 05:10:39.212949 | orchestrator | 2026-03-19 05:10:39 - testbed 2026-03-19 05:10:40.285826 | orchestrator | 2026-03-19 05:10:40 - clean up subnets 2026-03-19 05:10:40.327872 | orchestrator | 2026-03-19 05:10:40 - subnet-testbed-management 2026-03-19 05:10:40.515496 | orchestrator | 2026-03-19 05:10:40 - clean up networks 2026-03-19 05:10:40.710632 | orchestrator | 2026-03-19 05:10:40 - net-testbed-management 2026-03-19 05:10:41.066178 | orchestrator | 2026-03-19 05:10:41 - clean up security groups 2026-03-19 05:10:41.124391 | orchestrator | 2026-03-19 05:10:41 - testbed-node 2026-03-19 05:10:41.249847 | orchestrator | 2026-03-19 05:10:41 - testbed-management 2026-03-19 05:10:41.389102 | orchestrator | 2026-03-19 05:10:41 - clean up floating ips 2026-03-19 05:10:41.421887 | orchestrator | 2026-03-19 05:10:41 - 81.163.193.56 2026-03-19 05:10:41.815739 | orchestrator | 2026-03-19 05:10:41 - clean up routers 2026-03-19 05:10:41.924468 | orchestrator | 2026-03-19 05:10:41 - testbed 2026-03-19 05:10:42.994821 | orchestrator | ok: Runtime: 0:00:22.619741 2026-03-19 05:10:42.999232 
| 2026-03-19 05:10:42.999395 | PLAY RECAP 2026-03-19 05:10:42.999521 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-19 05:10:42.999583 | 2026-03-19 05:10:43.128565 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-19 05:10:43.132458 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-19 05:10:43.850871 | 2026-03-19 05:10:43.851038 | PLAY [Cleanup play] 2026-03-19 05:10:43.867705 | 2026-03-19 05:10:43.867865 | TASK [Set cloud fact (Zuul deployment)] 2026-03-19 05:10:43.928228 | orchestrator | ok 2026-03-19 05:10:43.938093 | 2026-03-19 05:10:43.938252 | TASK [Set cloud fact (local deployment)] 2026-03-19 05:10:43.962502 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:43.973829 | 2026-03-19 05:10:43.973953 | TASK [Clean the cloud environment] 2026-03-19 05:10:45.098215 | orchestrator | 2026-03-19 05:10:45 - clean up servers 2026-03-19 05:10:45.572496 | orchestrator | 2026-03-19 05:10:45 - clean up keypairs 2026-03-19 05:10:45.592485 | orchestrator | 2026-03-19 05:10:45 - wait for servers to be gone 2026-03-19 05:10:45.644139 | orchestrator | 2026-03-19 05:10:45 - clean up ports 2026-03-19 05:10:45.725671 | orchestrator | 2026-03-19 05:10:45 - clean up volumes 2026-03-19 05:10:45.791323 | orchestrator | 2026-03-19 05:10:45 - disconnect routers 2026-03-19 05:10:45.830572 | orchestrator | 2026-03-19 05:10:45 - clean up subnets 2026-03-19 05:10:45.851825 | orchestrator | 2026-03-19 05:10:45 - clean up networks 2026-03-19 05:10:46.462340 | orchestrator | 2026-03-19 05:10:46 - clean up security groups 2026-03-19 05:10:46.590442 | orchestrator | 2026-03-19 05:10:46 - clean up floating ips 2026-03-19 05:10:46.619503 | orchestrator | 2026-03-19 05:10:46 - clean up routers 2026-03-19 05:10:47.009893 | orchestrator | ok: Runtime: 0:00:01.896161 2026-03-19 05:10:47.013884 | 2026-03-19 05:10:47.014087 | PLAY RECAP 
2026-03-19 05:10:47.014226 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-19 05:10:47.014294 | 2026-03-19 05:10:47.165000 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-19 05:10:47.166005 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-19 05:10:47.913116 | 2026-03-19 05:10:47.913270 | PLAY [Base post-fetch] 2026-03-19 05:10:47.929594 | 2026-03-19 05:10:47.929772 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-19 05:10:47.985266 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:47.999633 | 2026-03-19 05:10:47.999882 | TASK [fetch-output : Set log path for single node] 2026-03-19 05:10:48.048233 | orchestrator | ok 2026-03-19 05:10:48.057718 | 2026-03-19 05:10:48.057935 | LOOP [fetch-output : Ensure local output dirs] 2026-03-19 05:10:48.580914 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/logs" 2026-03-19 05:10:48.887547 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/artifacts" 2026-03-19 05:10:49.138000 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8fffc4fdde5e43cd9dfdb6b2ab020e89/work/docs" 2026-03-19 05:10:49.153125 | 2026-03-19 05:10:49.153349 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-19 05:10:50.082048 | orchestrator | changed: .d..t...... ./ 2026-03-19 05:10:50.082296 | orchestrator | changed: All items complete 2026-03-19 05:10:50.082329 | 2026-03-19 05:10:50.824156 | orchestrator | changed: .d..t...... ./ 2026-03-19 05:10:51.556896 | orchestrator | changed: .d..t...... 
./ 2026-03-19 05:10:51.593764 | 2026-03-19 05:10:51.593939 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-19 05:10:51.631569 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:51.638150 | orchestrator | skipping: Conditional result was False 2026-03-19 05:10:51.658999 | 2026-03-19 05:10:51.659125 | PLAY RECAP 2026-03-19 05:10:51.659199 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-19 05:10:51.659236 | 2026-03-19 05:10:51.809685 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-19 05:10:51.812080 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-19 05:10:52.581260 | 2026-03-19 05:10:52.581420 | PLAY [Base post] 2026-03-19 05:10:52.596246 | 2026-03-19 05:10:52.596390 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-19 05:10:53.562623 | orchestrator | changed 2026-03-19 05:10:53.569998 | 2026-03-19 05:10:53.570112 | PLAY RECAP 2026-03-19 05:10:53.570175 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-19 05:10:53.570237 | 2026-03-19 05:10:53.694652 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-19 05:10:53.697225 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-19 05:10:54.487528 | 2026-03-19 05:10:54.487702 | PLAY [Base post-logs] 2026-03-19 05:10:54.498387 | 2026-03-19 05:10:54.498521 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-19 05:10:54.956879 | localhost | changed 2026-03-19 05:10:54.970485 | 2026-03-19 05:10:54.970673 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-19 05:10:55.008333 | localhost | ok 2026-03-19 05:10:55.013231 | 2026-03-19 05:10:55.013364 | TASK [Set zuul-log-path fact] 2026-03-19 
05:10:55.041575 | localhost | ok 2026-03-19 05:10:55.058216 | 2026-03-19 05:10:55.058444 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-19 05:10:55.098131 | localhost | ok 2026-03-19 05:10:55.105976 | 2026-03-19 05:10:55.106169 | TASK [upload-logs : Create log directories] 2026-03-19 05:10:55.644849 | localhost | changed 2026-03-19 05:10:55.650415 | 2026-03-19 05:10:55.650533 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-19 05:10:56.137276 | localhost -> localhost | ok: Runtime: 0:00:00.004485 2026-03-19 05:10:56.142885 | 2026-03-19 05:10:56.143016 | TASK [upload-logs : Upload logs to log server] 2026-03-19 05:10:56.723906 | localhost | Output suppressed because no_log was given 2026-03-19 05:10:56.727513 | 2026-03-19 05:10:56.727706 | LOOP [upload-logs : Compress console log and json output] 2026-03-19 05:10:56.777898 | localhost | skipping: Conditional result was False 2026-03-19 05:10:56.782688 | localhost | skipping: Conditional result was False 2026-03-19 05:10:56.790476 | 2026-03-19 05:10:56.790677 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-19 05:10:56.836439 | localhost | skipping: Conditional result was False 2026-03-19 05:10:56.837319 | 2026-03-19 05:10:56.840558 | localhost | skipping: Conditional result was False 2026-03-19 05:10:56.853204 | 2026-03-19 05:10:56.853430 | LOOP [upload-logs : Upload console log and json output]